Jan 24 07:41:00 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 24 07:41:00 crc restorecon[4673]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 07:41:00 crc restorecon[4673]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc 
restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc 
restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 
07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc 
restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 07:41:00 crc 
restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:00
crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 
07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 07:41:00 crc 
restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc 
restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc 
restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:00 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc 
restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc 
restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc 
restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc 
restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 07:41:01 crc restorecon[4673]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 07:41:01 crc restorecon[4673]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 24 07:41:01 crc kubenswrapper[4705]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 07:41:01 crc kubenswrapper[4705]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 24 07:41:01 crc kubenswrapper[4705]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 07:41:01 crc kubenswrapper[4705]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 07:41:01 crc kubenswrapper[4705]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 24 07:41:01 crc kubenswrapper[4705]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.445840 4705 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.450895 4705 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.450929 4705 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.450937 4705 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.450944 4705 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.450952 4705 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.450959 4705 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.450967 4705 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.450973 4705 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.450980 4705 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.450987 4705 
feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.450994 4705 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451000 4705 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451007 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451013 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451020 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451027 4705 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451034 4705 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451040 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451046 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451053 4705 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451079 4705 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451087 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451094 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451102 4705 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451109 4705 feature_gate.go:330] unrecognized feature gate: Example
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451115 4705 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451122 4705 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451128 4705 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451135 4705 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451141 4705 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451148 4705 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451155 4705 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451164 4705 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451172 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451180 4705 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451187 4705 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451195 4705 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451202 4705 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451209 4705 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451215 4705 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451222 4705 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451229 4705 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451235 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451242 4705 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451249 4705 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451256 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451265 4705 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451271 4705 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451279 4705 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451286 4705 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451292 4705 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451298 4705 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451305 4705 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451315 4705 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451322 4705 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451329 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451335 4705 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451343 4705 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451349 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451355 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451362 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451369 4705 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451375 4705 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451385 4705 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451396 4705 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451406 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451414 4705 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451421 4705 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451428 4705 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451437 4705 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.451446 4705 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451764 4705 flags.go:64] FLAG: --address="0.0.0.0"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451782 4705 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451795 4705 flags.go:64] FLAG: --anonymous-auth="true"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451805 4705 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451815 4705 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451844 4705 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451855 4705 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451865 4705 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451873 4705 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451881 4705 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451889 4705 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451898 4705 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451907 4705 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451914 4705 flags.go:64] FLAG: --cgroup-root=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451922 4705 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451930 4705 flags.go:64] FLAG: --client-ca-file=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451937 4705 flags.go:64] FLAG: --cloud-config=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451945 4705 flags.go:64] FLAG: --cloud-provider=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451954 4705 flags.go:64] FLAG: --cluster-dns="[]"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451963 4705 flags.go:64] FLAG: --cluster-domain=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451971 4705 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451979 4705 flags.go:64] FLAG: --config-dir=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451988 4705 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.451997 4705 flags.go:64] FLAG: --container-log-max-files="5"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452007 4705 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452014 4705 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452022 4705 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452030 4705 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452038 4705 flags.go:64] FLAG: --contention-profiling="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452046 4705 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452054 4705 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452063 4705 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452071 4705 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452081 4705 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452089 4705 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452096 4705 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452103 4705 flags.go:64] FLAG: --enable-load-reader="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452111 4705 flags.go:64] FLAG: --enable-server="true"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452119 4705 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452129 4705 flags.go:64] FLAG: --event-burst="100"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452137 4705 flags.go:64] FLAG: --event-qps="50"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452144 4705 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452152 4705 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452160 4705 flags.go:64] FLAG: --eviction-hard=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452170 4705 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452177 4705 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452220 4705 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452229 4705 flags.go:64] FLAG: --eviction-soft=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452238 4705 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452246 4705 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452253 4705 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452261 4705 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452269 4705 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452277 4705 flags.go:64] FLAG: --fail-swap-on="true"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452286 4705 flags.go:64] FLAG: --feature-gates=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452296 4705 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452304 4705 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452313 4705 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452321 4705 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452329 4705 flags.go:64] FLAG: --healthz-port="10248"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452337 4705 flags.go:64] FLAG: --help="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452345 4705 flags.go:64] FLAG: --hostname-override=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452352 4705 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452360 4705 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452368 4705 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452380 4705 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452388 4705 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452396 4705 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452404 4705 flags.go:64] FLAG: --image-service-endpoint=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452411 4705 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452419 4705 flags.go:64] FLAG: --kube-api-burst="100"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452426 4705 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452434 4705 flags.go:64] FLAG: --kube-api-qps="50"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452442 4705 flags.go:64] FLAG: --kube-reserved=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452449 4705 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452457 4705 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452465 4705 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452472 4705 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452480 4705 flags.go:64] FLAG: --lock-file=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452487 4705 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452495 4705 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452503 4705 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452514 4705 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452523 4705 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452530 4705 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452538 4705 flags.go:64] FLAG: --logging-format="text"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452545 4705 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452553 4705 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452561 4705 flags.go:64] FLAG: --manifest-url=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452569 4705 flags.go:64] FLAG: --manifest-url-header=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452580 4705 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452588 4705 flags.go:64] FLAG: --max-open-files="1000000"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452597 4705 flags.go:64] FLAG: --max-pods="110"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452605 4705 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452613 4705 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452620 4705 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452628 4705 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452639 4705 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452647 4705 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452655 4705 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452672 4705 flags.go:64] FLAG: --node-status-max-images="50"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452680 4705 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452688 4705 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452696 4705 flags.go:64] FLAG: --pod-cidr=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452703 4705 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452714 4705 flags.go:64] FLAG: --pod-manifest-path=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452722 4705 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452729 4705 flags.go:64] FLAG: --pods-per-core="0"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452737 4705 flags.go:64] FLAG: --port="10250"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452744 4705 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452752 4705 flags.go:64] FLAG: --provider-id=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452760 4705 flags.go:64] FLAG: --qos-reserved=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452767 4705 flags.go:64] FLAG: --read-only-port="10255"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452775 4705 flags.go:64] FLAG: --register-node="true"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452783 4705 flags.go:64] FLAG: --register-schedulable="true"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452791 4705 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452804 4705 flags.go:64] FLAG: --registry-burst="10"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452811 4705 flags.go:64] FLAG: --registry-qps="5"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452841 4705 flags.go:64] FLAG: --reserved-cpus=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452850 4705 flags.go:64] FLAG: --reserved-memory=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452859 4705 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452867 4705 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452875 4705 flags.go:64] FLAG: --rotate-certificates="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452883 4705 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452890 4705 flags.go:64] FLAG: --runonce="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452898 4705 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452907 4705 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452916 4705 flags.go:64] FLAG: --seccomp-default="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452923 4705 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452931 4705 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452943 4705 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452951 4705 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452959 4705 flags.go:64] FLAG: --storage-driver-password="root"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452967 4705 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452974 4705 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452981 4705 flags.go:64] FLAG: --storage-driver-user="root"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452989 4705 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.452997 4705 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.453005 4705 flags.go:64] FLAG: --system-cgroups=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.453013 4705 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.453025 4705 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.453032 4705 flags.go:64] FLAG: --tls-cert-file=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.453040 4705 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.453048 4705 flags.go:64] FLAG: --tls-min-version=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.453055 4705 flags.go:64] FLAG: --tls-private-key-file=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.453063 4705 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.453070 4705 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.453078 4705 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.453085 4705 flags.go:64] FLAG: --v="2"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.453095 4705 flags.go:64] FLAG: --version="false"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.453104 4705 flags.go:64] FLAG: --vmodule=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.453113 4705 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.453121 4705 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.453508 4705 feature_gate.go:330] unrecognized feature gate: Example
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.453526 4705 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454358 4705 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454477 4705 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454699 4705 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454707 4705 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454711 4705 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454718 4705 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454722 4705 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454726 4705 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454731 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454735 4705 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454739 4705 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454742 4705 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454746 4705 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454750 4705 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454754 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454757 4705 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454763 4705 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454769 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454774 4705 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454780 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454784 4705 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454788 4705 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454793 4705 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454796 4705 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454800 4705 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454804 4705 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454808 4705 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454811 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454815 4705 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454839 4705 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454843 4705 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454848 4705 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454852 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454856 4705 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454861 4705 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454869 4705 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454873 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454876 4705 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454880 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454884 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454888 4705 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454891 4705 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454897 4705 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454901 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454905 4705 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454909 4705 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454914 4705 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454919 4705 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454923 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454927 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454931 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454934 4705 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454938 4705 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454941 4705 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454945 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454949 4705 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454954 4705 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454958 4705 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454962 4705 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454966 4705 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454969 4705 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454973 4705 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454976 4705 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454980 4705 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454983 4705 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454988 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.454997 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.455001 4705 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.455005 4705 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.455012 4705 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.461965 4705 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.461994 4705 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462060 4705 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 24 07:41:01 
crc kubenswrapper[4705]: W0124 07:41:01.462066 4705 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462071 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462075 4705 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462080 4705 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462084 4705 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462089 4705 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462095 4705 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462100 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462104 4705 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462108 4705 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462112 4705 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462116 4705 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462120 4705 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462123 4705 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 24 07:41:01 crc 
kubenswrapper[4705]: W0124 07:41:01.462127 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462131 4705 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462135 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462140 4705 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462145 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462149 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462152 4705 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462156 4705 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462160 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462164 4705 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462168 4705 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462171 4705 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462176 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462180 4705 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462183 
4705 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462187 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462190 4705 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462194 4705 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462199 4705 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462202 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462206 4705 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462209 4705 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462213 4705 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462217 4705 feature_gate.go:330] unrecognized feature gate: Example Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462220 4705 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462223 4705 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462227 4705 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462231 4705 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462235 4705 feature_gate.go:330] unrecognized 
feature gate: InsightsOnDemandDataGather Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462238 4705 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462242 4705 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462245 4705 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462249 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462253 4705 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462257 4705 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462261 4705 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462264 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462269 4705 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462274 4705 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462278 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462282 4705 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462286 4705 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462291 4705 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462295 4705 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462299 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462303 4705 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462307 4705 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462310 4705 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462314 4705 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462317 4705 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462321 4705 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462324 4705 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462328 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462332 4705 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462336 4705 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462340 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.462346 4705 feature_gate.go:386] feature gates: 
{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462495 4705 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462504 4705 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462508 4705 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462512 4705 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462516 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462520 4705 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462523 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462527 4705 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462531 4705 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462535 4705 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462539 4705 feature_gate.go:330] unrecognized feature gate: 
SetEIPForNLBIngressController Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462543 4705 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462547 4705 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462550 4705 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462554 4705 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462558 4705 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462561 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462565 4705 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462569 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462573 4705 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462576 4705 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462580 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462584 4705 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462588 4705 feature_gate.go:330] unrecognized feature gate: Example Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462592 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462595 4705 feature_gate.go:330] 
unrecognized feature gate: AWSClusterHostedDNS Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462599 4705 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462602 4705 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462606 4705 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462610 4705 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462614 4705 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462617 4705 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462622 4705 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462627 4705 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462631 4705 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462635 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462639 4705 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462642 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462646 4705 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462649 4705 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462653 4705 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462656 4705 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462660 4705 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462663 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462668 4705 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462672 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462676 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462680 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462685 4705 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462689 4705 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462694 4705 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462698 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462703 4705 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462707 4705 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462711 4705 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462715 4705 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462718 4705 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462722 4705 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462726 4705 feature_gate.go:330] unrecognized feature gate: 
ChunkSizeMiB Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462729 4705 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462733 4705 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462736 4705 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462740 4705 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462743 4705 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462747 4705 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462751 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462754 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462758 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462762 4705 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462766 4705 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.462769 4705 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.462775 4705 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false 
RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.463179 4705 server.go:940] "Client rotation is on, will bootstrap in background" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.465394 4705 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.465472 4705 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.466027 4705 server.go:997] "Starting client certificate rotation" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.466046 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.466351 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-24 21:44:02.130501918 +0000 UTC Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.466469 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.471587 4705 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 24 07:41:01 crc kubenswrapper[4705]: E0124 07:41:01.473232 4705 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial 
tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.474178 4705 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.483215 4705 log.go:25] "Validated CRI v1 runtime API" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.498801 4705 log.go:25] "Validated CRI v1 image API" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.500540 4705 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.502604 4705 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-24-07-36-55-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.502634 4705 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.518780 4705 manager.go:217] Machine: {Timestamp:2026-01-24 07:41:01.517003704 +0000 UTC m=+0.236877012 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec 
SystemUUID:8dcf50f1-4fbc-440d-b092-936c9603c61c BootID:c57bc973-aee3-462e-9560-e18c43dd1277 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:9e:e1:bf Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:9e:e1:bf Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:17:53:d7 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:1e:6f:a9 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:94:96:3c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:de:0a:68 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:c2:3f:fe:a5:af:b6 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:36:fd:5a:4a:8d:b7 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 
Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.519076 4705 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.519310 4705 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.519730 4705 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.519906 4705 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.519934 4705 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.520134 4705 topology_manager.go:138] "Creating topology manager with none policy"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.520144 4705 container_manager_linux.go:303] "Creating device plugin manager"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.520328 4705 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.520360 4705 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.520640 4705 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.520724 4705 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.521529 4705 kubelet.go:418] "Attempting to sync node with API server"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.521795 4705 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.521845 4705 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.521862 4705 kubelet.go:324] "Adding apiserver pod source"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.521875 4705 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.523410 4705 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.523726 4705 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.524430 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused
Jan 24 07:41:01 crc kubenswrapper[4705]: E0124 07:41:01.524520 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError"
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.524537 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused
Jan 24 07:41:01 crc kubenswrapper[4705]: E0124 07:41:01.524617 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.524700 4705 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.525231 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.525253 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.525262 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.525271 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.525284 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.525293 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.525301 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.525315 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.525324 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.525334 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.525346 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.525354 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.525890 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.526254 4705 server.go:1280] "Started kubelet"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.526443 4705 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.526549 4705 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.526555 4705 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.527315 4705 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 24 07:41:01 crc systemd[1]: Started Kubernetes Kubelet.
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.529143 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.530035 4705 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.529509 4705 server.go:460] "Adding debug handlers to kubelet server"
Jan 24 07:41:01 crc kubenswrapper[4705]: E0124 07:41:01.529739 4705 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.15:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d9ad36f84fcc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 07:41:01.52623636 +0000 UTC m=+0.246109648,LastTimestamp:2026-01-24 07:41:01.52623636 +0000 UTC m=+0.246109648,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.534886 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 12:12:56.365870061 +0000 UTC
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.535941 4705 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.535958 4705 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.536076 4705 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 24 07:41:01 crc kubenswrapper[4705]: E0124 07:41:01.536234 4705 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 24 07:41:01 crc kubenswrapper[4705]: E0124 07:41:01.536938 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="200ms"
Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.537054 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused
Jan 24 07:41:01 crc kubenswrapper[4705]: E0124 07:41:01.537696 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError"
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.538945 4705 factory.go:55] Registering systemd factory
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.539687 4705 factory.go:221] Registration of the systemd container factory successfully
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.540019 4705 factory.go:153] Registering CRI-O factory
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.540061 4705 factory.go:221] Registration of the crio container factory successfully
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.540131 4705 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.540153 4705 factory.go:103] Registering Raw factory
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.540168 4705 manager.go:1196] Started watching for new ooms in manager
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.540952 4705 manager.go:319] Starting recovery of all containers
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544147 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544201 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544217 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544230 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544244 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544256 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544268 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544279 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544294 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544305 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" 
seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544317 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544330 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544345 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544359 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544370 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544381 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: 
I0124 07:41:01.544392 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544404 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544415 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544426 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544437 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544447 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544459 4705 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544471 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544484 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544497 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544511 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544524 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544537 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544550 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544562 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544574 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544587 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544600 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544613 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544626 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544638 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544650 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544662 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544675 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544689 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544702 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544715 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544727 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544738 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544752 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544765 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544778 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544791 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544802 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544833 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544848 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544864 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544878 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544890 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544904 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544918 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544929 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544941 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544952 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544965 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544976 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.544988 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545000 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545013 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" 
seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545024 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545037 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545049 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545061 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545072 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545085 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 24 07:41:01 crc 
kubenswrapper[4705]: I0124 07:41:01.545096 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545109 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545119 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545130 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545143 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545154 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545165 4705 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545178 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545188 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545199 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545211 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545223 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545233 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545243 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545254 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545265 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545276 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545288 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545298 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545308 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545319 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545329 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545339 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545349 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545361 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545370 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545381 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545391 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545401 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545413 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545423 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545435 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545447 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545464 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545475 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545487 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545501 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" 
seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545513 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545525 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545536 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545577 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545588 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545601 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 24 07:41:01 crc 
kubenswrapper[4705]: I0124 07:41:01.545614 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545624 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545636 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545647 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545657 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545668 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545680 4705 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545691 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545701 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545712 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545723 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545733 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545745 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545755 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545766 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545778 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545788 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545868 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545886 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545898 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545911 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545922 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545933 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545945 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545956 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" 
seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545968 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545979 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545989 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.545999 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546011 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546021 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 
07:41:01.546032 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546043 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546053 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546064 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546077 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546088 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546100 4705 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546111 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546122 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546132 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546145 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546155 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546168 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546180 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546191 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546203 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546214 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546224 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546235 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546245 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546257 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546269 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546280 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546295 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546305 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546316 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546329 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546341 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546354 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546365 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546376 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 24 
07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546386 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546397 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546412 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546422 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546432 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546443 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546453 4705 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546464 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546477 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546494 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546507 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546518 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546528 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546541 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546552 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546562 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546574 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546585 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546598 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546613 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546627 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546642 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546655 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546667 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546679 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546690 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546701 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546712 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546723 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546734 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.546747 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.547352 4705 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.547372 4705 reconstruct.go:97] "Volume reconstruction finished" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.547390 4705 reconciler.go:26] "Reconciler: start to sync state" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.555635 4705 manager.go:324] Recovery completed Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.564322 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.568995 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.569053 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.569066 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.570276 4705 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.570401 4705 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.570542 4705 state_mem.go:36] "Initialized new in-memory state store" Jan 24 07:41:01 crc 
kubenswrapper[4705]: I0124 07:41:01.572132 4705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.574361 4705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.574395 4705 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.574418 4705 kubelet.go:2335] "Starting kubelet main sync loop" Jan 24 07:41:01 crc kubenswrapper[4705]: E0124 07:41:01.574455 4705 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 07:41:01 crc kubenswrapper[4705]: W0124 07:41:01.576545 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Jan 24 07:41:01 crc kubenswrapper[4705]: E0124 07:41:01.576776 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.578528 4705 policy_none.go:49] "None policy: Start" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.579128 4705 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.579157 4705 state_mem.go:35] "Initializing new in-memory state store" Jan 24 07:41:01 crc kubenswrapper[4705]: E0124 07:41:01.637210 4705 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.639006 4705 manager.go:334] "Starting Device Plugin manager" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.639052 4705 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.639067 4705 server.go:79] "Starting device plugin registration server" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.639426 4705 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.639441 4705 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.639575 4705 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.639663 4705 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.639672 4705 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 07:41:01 crc kubenswrapper[4705]: E0124 07:41:01.645108 4705 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.675246 4705 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.675395 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.676376 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.676407 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.676417 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.676533 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.676718 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.676751 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.677088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.677120 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.677128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.677288 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.677456 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.677522 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.677988 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.678003 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.678022 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.678105 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.678273 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.678307 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.678358 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.678358 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.678378 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.678386 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.678390 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.678399 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.678764 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.678779 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.678787 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.679138 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:01 crc 
kubenswrapper[4705]: I0124 07:41:01.679188 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.679199 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.679268 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.679334 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.679368 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.679745 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.679763 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.679770 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.679891 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.679920 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.679956 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.679973 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.679982 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.680456 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.680477 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.680487 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:01 crc kubenswrapper[4705]: E0124 07:41:01.738643 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="400ms" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.739649 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.740603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 
07:41:01.740747 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.740892 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.741203 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 07:41:01 crc kubenswrapper[4705]: E0124 07:41:01.741767 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.15:6443: connect: connection refused" node="crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.750936 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.750971 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.751000 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.751024 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.751044 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.751069 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.751090 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.751110 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.751142 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.751177 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.751193 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.751213 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.751259 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.751277 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") 
pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.751292 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.851879 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.851946 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.851981 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852015 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852045 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852044 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852094 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852153 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852156 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852119 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852199 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852070 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852247 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852281 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852317 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852346 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852347 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852389 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852391 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852407 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852362 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852387 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852425 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852418 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852449 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852460 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852483 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852480 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852565 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.852597 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.942679 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.944564 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.944600 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.944611 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.944633 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 07:41:01 crc kubenswrapper[4705]: E0124 07:41:01.945115 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.15:6443: connect: connection refused" node="crc" Jan 24 07:41:01 crc kubenswrapper[4705]: I0124 07:41:01.997430 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.024290 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 24 07:41:02 crc kubenswrapper[4705]: W0124 07:41:02.026756 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-0c00026c0c73f48e79c5e781f248748c3e2c36ac07d1d0f88513c59aa43694f4 WatchSource:0}: Error finding container 0c00026c0c73f48e79c5e781f248748c3e2c36ac07d1d0f88513c59aa43694f4: Status 404 returned error can't find the container with id 0c00026c0c73f48e79c5e781f248748c3e2c36ac07d1d0f88513c59aa43694f4 Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.030263 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.035590 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.058809 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 07:41:02 crc kubenswrapper[4705]: W0124 07:41:02.084461 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-c80313786f12792187c42f5d90fd19be4719a9f59aa66febd41398b4a011a430 WatchSource:0}: Error finding container c80313786f12792187c42f5d90fd19be4719a9f59aa66febd41398b4a011a430: Status 404 returned error can't find the container with id c80313786f12792187c42f5d90fd19be4719a9f59aa66febd41398b4a011a430 Jan 24 07:41:02 crc kubenswrapper[4705]: W0124 07:41:02.094459 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ac04f1870e321420229fd4397a2e4458110e8d3a37a8741afb1f635fa2724a5b WatchSource:0}: Error finding container ac04f1870e321420229fd4397a2e4458110e8d3a37a8741afb1f635fa2724a5b: Status 404 returned error can't find the container with id ac04f1870e321420229fd4397a2e4458110e8d3a37a8741afb1f635fa2724a5b Jan 24 07:41:02 crc kubenswrapper[4705]: W0124 07:41:02.099531 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-0d4793c5c935792cd97b6789fffff8c57d4cb9bebccb6b5924a1a9888e789409 WatchSource:0}: Error finding container 0d4793c5c935792cd97b6789fffff8c57d4cb9bebccb6b5924a1a9888e789409: Status 404 returned error can't find the container with id 0d4793c5c935792cd97b6789fffff8c57d4cb9bebccb6b5924a1a9888e789409 Jan 24 07:41:02 crc kubenswrapper[4705]: W0124 07:41:02.104151 4705 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-bf919608799d23b7d34c655b17d30ab9bbc74233937b05b5ea85fc560ab88b8f WatchSource:0}: Error finding container bf919608799d23b7d34c655b17d30ab9bbc74233937b05b5ea85fc560ab88b8f: Status 404 returned error can't find the container with id bf919608799d23b7d34c655b17d30ab9bbc74233937b05b5ea85fc560ab88b8f Jan 24 07:41:02 crc kubenswrapper[4705]: E0124 07:41:02.140485 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="800ms" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.346195 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.347709 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.347746 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.347755 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.347778 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 07:41:02 crc kubenswrapper[4705]: E0124 07:41:02.348313 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.15:6443: connect: connection refused" node="crc" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.527264 4705 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.535267 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 16:53:35.592374799 +0000 UTC Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.578647 4705 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383" exitCode=0 Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.578706 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383"} Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.578796 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0c00026c0c73f48e79c5e781f248748c3e2c36ac07d1d0f88513c59aa43694f4"} Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.578936 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.580030 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.580056 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.580066 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 
07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.580357 4705 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9" exitCode=0 Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.580427 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9"} Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.580453 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bf919608799d23b7d34c655b17d30ab9bbc74233937b05b5ea85fc560ab88b8f"} Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.580535 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.581702 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.581734 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.581745 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.583082 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166"} Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.583123 4705 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0d4793c5c935792cd97b6789fffff8c57d4cb9bebccb6b5924a1a9888e789409"} Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.584629 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044" exitCode=0 Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.584701 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044"} Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.584727 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ac04f1870e321420229fd4397a2e4458110e8d3a37a8741afb1f635fa2724a5b"} Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.584845 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.585583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.585611 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.585620 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.586216 4705 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8" exitCode=0 Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.586253 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8"} Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.586276 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c80313786f12792187c42f5d90fd19be4719a9f59aa66febd41398b4a011a430"} Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.586375 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.587319 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.587558 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.587588 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.587597 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.588218 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.588248 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:02 crc kubenswrapper[4705]: I0124 07:41:02.588261 4705 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:02 crc kubenswrapper[4705]: W0124 07:41:02.803345 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Jan 24 07:41:02 crc kubenswrapper[4705]: E0124 07:41:02.803441 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError" Jan 24 07:41:02 crc kubenswrapper[4705]: W0124 07:41:02.813436 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Jan 24 07:41:02 crc kubenswrapper[4705]: E0124 07:41:02.813487 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError" Jan 24 07:41:02 crc kubenswrapper[4705]: E0124 07:41:02.942308 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="1.6s" Jan 24 07:41:02 crc kubenswrapper[4705]: W0124 07:41:02.950804 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Jan 24 07:41:02 crc kubenswrapper[4705]: E0124 07:41:02.950957 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError" Jan 24 07:41:03 crc kubenswrapper[4705]: W0124 07:41:03.090685 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Jan 24 07:41:03 crc kubenswrapper[4705]: E0124 07:41:03.091033 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError" Jan 24 07:41:03 crc kubenswrapper[4705]: I0124 07:41:03.148478 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:03 crc kubenswrapper[4705]: I0124 07:41:03.149799 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:03 crc kubenswrapper[4705]: I0124 07:41:03.149874 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:03 crc kubenswrapper[4705]: I0124 07:41:03.149886 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:03 
crc kubenswrapper[4705]: I0124 07:41:03.149918 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 07:41:03 crc kubenswrapper[4705]: E0124 07:41:03.150526 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.15:6443: connect: connection refused" node="crc" Jan 24 07:41:03 crc kubenswrapper[4705]: I0124 07:41:03.527354 4705 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Jan 24 07:41:03 crc kubenswrapper[4705]: I0124 07:41:03.535372 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 17:44:12.328514986 +0000 UTC Jan 24 07:41:03 crc kubenswrapper[4705]: I0124 07:41:03.557800 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 24 07:41:03 crc kubenswrapper[4705]: E0124 07:41:03.558785 4705 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError" Jan 24 07:41:03 crc kubenswrapper[4705]: I0124 07:41:03.590774 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"344cb802c68b9e25afbc22949d27bd8416ae576ba6d6bd070e9281e0a6975bba"} Jan 24 07:41:03 crc kubenswrapper[4705]: I0124 07:41:03.593091 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03"} Jan 24 07:41:03 crc kubenswrapper[4705]: I0124 07:41:03.594726 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704"} Jan 24 07:41:03 crc kubenswrapper[4705]: I0124 07:41:03.596358 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5"} Jan 24 07:41:03 crc kubenswrapper[4705]: I0124 07:41:03.597736 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bac307186907ea4fde1cbf870e0ab89488fae41b12cad0fa69e874d5272f1d95"} Jan 24 07:41:04 crc kubenswrapper[4705]: W0124 07:41:04.483854 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Jan 24 07:41:04 crc kubenswrapper[4705]: E0124 07:41:04.483923 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError" Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.528004 4705 csi_plugin.go:884] Failed to contact API 
server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.536160 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:51:23.705560126 +0000 UTC Jan 24 07:41:04 crc kubenswrapper[4705]: E0124 07:41:04.543926 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="3.2s" Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.601773 4705 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5" exitCode=0 Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.601863 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5"} Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.601969 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.602600 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.602619 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.602628 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.606231 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d358da385a0af4a5ff14c3b23d2ce504be52d63d9b1808edfc78ed6e12bf9402"} Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.609242 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc"} Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.611697 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212"} Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.611737 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.612471 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.612510 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.612523 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.750955 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.752153 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 
07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.752198 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.752208 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:04 crc kubenswrapper[4705]: I0124 07:41:04.752231 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 07:41:04 crc kubenswrapper[4705]: E0124 07:41:04.752606 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.15:6443: connect: connection refused" node="crc" Jan 24 07:41:04 crc kubenswrapper[4705]: W0124 07:41:04.770806 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Jan 24 07:41:04 crc kubenswrapper[4705]: E0124 07:41:04.771014 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError" Jan 24 07:41:05 crc kubenswrapper[4705]: W0124 07:41:05.355551 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Jan 24 07:41:05 crc kubenswrapper[4705]: E0124 07:41:05.355715 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: 
failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError" Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.536240 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 07:33:05.005041885 +0000 UTC Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.615621 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bba7d064c3ebfe719451fea0f2144622154aac8eb18df6666adb9d5b464db49d"} Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.615650 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.616618 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.616677 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.616688 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.617681 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240"} Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.617732 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:05 crc 
kubenswrapper[4705]: I0124 07:41:05.618554 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.618596 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.618610 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.620057 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3"} Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.620082 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37"} Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.620093 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29"} Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.621387 4705 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db" exitCode=0 Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.621416 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db"} Jan 24 
07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.621501 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.622024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.622052 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:05 crc kubenswrapper[4705]: I0124 07:41:05.622064 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.536561 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 11:52:16.536110192 +0000 UTC Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.626191 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f"} Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.626228 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3"} Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.626238 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d"} Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.626248 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e"} Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.626238 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.626238 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.626804 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.626864 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.627163 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.627187 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.627198 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.627259 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.627301 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.627312 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.627921 4705 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.627939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:06 crc kubenswrapper[4705]: I0124 07:41:06.627947 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.452318 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.536745 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:25:19.648432925 +0000 UTC Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.633915 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed"} Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.633987 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.634043 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.634051 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.634050 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.637634 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:07 crc kubenswrapper[4705]: 
I0124 07:41:07.637686 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.637704 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.637959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.637996 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.638007 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.638172 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.638277 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.638291 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.660856 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.953524 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.954809 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.954859 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.954868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:07 crc kubenswrapper[4705]: I0124 07:41:07.954892 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 07:41:08 crc kubenswrapper[4705]: I0124 07:41:08.537778 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:13:16.926685272 +0000 UTC Jan 24 07:41:08 crc kubenswrapper[4705]: I0124 07:41:08.636532 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:08 crc kubenswrapper[4705]: I0124 07:41:08.636569 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:08 crc kubenswrapper[4705]: I0124 07:41:08.637582 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:08 crc kubenswrapper[4705]: I0124 07:41:08.637620 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:08 crc kubenswrapper[4705]: I0124 07:41:08.637635 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:08 crc kubenswrapper[4705]: I0124 07:41:08.638434 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:08 crc kubenswrapper[4705]: I0124 07:41:08.638490 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:08 crc kubenswrapper[4705]: I0124 07:41:08.638501 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:09 crc kubenswrapper[4705]: I0124 
07:41:09.538947 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 01:59:57.604675174 +0000 UTC Jan 24 07:41:10 crc kubenswrapper[4705]: I0124 07:41:10.194890 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:10 crc kubenswrapper[4705]: I0124 07:41:10.195094 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:10 crc kubenswrapper[4705]: I0124 07:41:10.196221 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:10 crc kubenswrapper[4705]: I0124 07:41:10.196257 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:10 crc kubenswrapper[4705]: I0124 07:41:10.196270 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:10 crc kubenswrapper[4705]: I0124 07:41:10.539665 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:31:24.064275774 +0000 UTC Jan 24 07:41:10 crc kubenswrapper[4705]: I0124 07:41:10.871765 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:10 crc kubenswrapper[4705]: I0124 07:41:10.871945 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:10 crc kubenswrapper[4705]: I0124 07:41:10.872809 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:10 crc kubenswrapper[4705]: I0124 07:41:10.872868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 07:41:10 crc kubenswrapper[4705]: I0124 07:41:10.872888 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:11 crc kubenswrapper[4705]: I0124 07:41:11.384733 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:11 crc kubenswrapper[4705]: I0124 07:41:11.539858 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 19:32:41.290801677 +0000 UTC Jan 24 07:41:11 crc kubenswrapper[4705]: I0124 07:41:11.642341 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:11 crc kubenswrapper[4705]: I0124 07:41:11.643107 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:11 crc kubenswrapper[4705]: I0124 07:41:11.643145 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:11 crc kubenswrapper[4705]: I0124 07:41:11.643157 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:11 crc kubenswrapper[4705]: E0124 07:41:11.645164 4705 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 24 07:41:12 crc kubenswrapper[4705]: I0124 07:41:12.151408 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 24 07:41:12 crc kubenswrapper[4705]: I0124 07:41:12.151616 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:12 crc kubenswrapper[4705]: I0124 07:41:12.152672 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 24 07:41:12 crc kubenswrapper[4705]: I0124 07:41:12.152720 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:12 crc kubenswrapper[4705]: I0124 07:41:12.152730 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:12 crc kubenswrapper[4705]: I0124 07:41:12.540980 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:02:31.125361888 +0000 UTC Jan 24 07:41:13 crc kubenswrapper[4705]: I0124 07:41:13.108964 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 24 07:41:13 crc kubenswrapper[4705]: I0124 07:41:13.109139 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:13 crc kubenswrapper[4705]: I0124 07:41:13.110271 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:13 crc kubenswrapper[4705]: I0124 07:41:13.110303 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:13 crc kubenswrapper[4705]: I0124 07:41:13.110315 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:13 crc kubenswrapper[4705]: I0124 07:41:13.542142 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 01:17:56.760025256 +0000 UTC Jan 24 07:41:14 crc kubenswrapper[4705]: I0124 07:41:14.385482 4705 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 07:41:14 crc kubenswrapper[4705]: I0124 07:41:14.385580 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 07:41:14 crc kubenswrapper[4705]: I0124 07:41:14.542796 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 10:42:27.77037071 +0000 UTC Jan 24 07:41:15 crc kubenswrapper[4705]: I0124 07:41:15.296662 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:15 crc kubenswrapper[4705]: I0124 07:41:15.296813 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:15 crc kubenswrapper[4705]: I0124 07:41:15.297889 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:15 crc kubenswrapper[4705]: I0124 07:41:15.297940 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:15 crc kubenswrapper[4705]: I0124 07:41:15.297953 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:15 crc kubenswrapper[4705]: I0124 07:41:15.300696 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:15 crc kubenswrapper[4705]: I0124 07:41:15.331207 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:15 crc kubenswrapper[4705]: I0124 07:41:15.335127 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:15 crc kubenswrapper[4705]: I0124 07:41:15.528845 4705 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 24 07:41:15 crc kubenswrapper[4705]: I0124 07:41:15.543857 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:35:48.892102035 +0000 UTC Jan 24 07:41:15 crc kubenswrapper[4705]: I0124 07:41:15.650992 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:15 crc kubenswrapper[4705]: I0124 07:41:15.651868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:15 crc kubenswrapper[4705]: I0124 07:41:15.651931 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:15 crc kubenswrapper[4705]: I0124 07:41:15.651953 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:15 crc kubenswrapper[4705]: W0124 07:41:15.879189 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 24 07:41:15 crc kubenswrapper[4705]: I0124 07:41:15.879270 4705 trace.go:236] Trace[1151150293]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 07:41:05.878) 
(total time: 10000ms): Jan 24 07:41:15 crc kubenswrapper[4705]: Trace[1151150293]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (07:41:15.879) Jan 24 07:41:15 crc kubenswrapper[4705]: Trace[1151150293]: [10.000936612s] [10.000936612s] END Jan 24 07:41:15 crc kubenswrapper[4705]: E0124 07:41:15.879289 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 24 07:41:16 crc kubenswrapper[4705]: I0124 07:41:16.544315 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 12:36:45.351987983 +0000 UTC Jan 24 07:41:16 crc kubenswrapper[4705]: I0124 07:41:16.572371 4705 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 24 07:41:16 crc kubenswrapper[4705]: I0124 07:41:16.572462 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 24 07:41:16 crc kubenswrapper[4705]: I0124 07:41:16.576523 4705 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed 
with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 24 07:41:16 crc kubenswrapper[4705]: I0124 07:41:16.576587 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 24 07:41:16 crc kubenswrapper[4705]: I0124 07:41:16.653354 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:16 crc kubenswrapper[4705]: I0124 07:41:16.654919 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:16 crc kubenswrapper[4705]: I0124 07:41:16.654957 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:16 crc kubenswrapper[4705]: I0124 07:41:16.654984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:17 crc kubenswrapper[4705]: I0124 07:41:17.545284 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 20:07:01.234295791 +0000 UTC Jan 24 07:41:18 crc kubenswrapper[4705]: I0124 07:41:18.545922 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 20:05:48.524428852 +0000 UTC Jan 24 07:41:19 crc kubenswrapper[4705]: I0124 07:41:19.547192 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 10:10:01.856101387 +0000 UTC Jan 24 07:41:19 crc kubenswrapper[4705]: I0124 
07:41:19.557333 4705 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 24 07:41:20 crc kubenswrapper[4705]: I0124 07:41:20.199456 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:20 crc kubenswrapper[4705]: I0124 07:41:20.199706 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:20 crc kubenswrapper[4705]: I0124 07:41:20.201115 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:20 crc kubenswrapper[4705]: I0124 07:41:20.201152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:20 crc kubenswrapper[4705]: I0124 07:41:20.201163 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:20 crc kubenswrapper[4705]: I0124 07:41:20.205795 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:20 crc kubenswrapper[4705]: I0124 07:41:20.548228 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 19:53:49.367420046 +0000 UTC Jan 24 07:41:20 crc kubenswrapper[4705]: I0124 07:41:20.849791 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:20 crc kubenswrapper[4705]: I0124 07:41:20.850360 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:20 crc kubenswrapper[4705]: I0124 07:41:20.850381 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:20 crc kubenswrapper[4705]: I0124 07:41:20.850389 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.548719 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:30:26.801899674 +0000 UTC Jan 24 07:41:21 crc kubenswrapper[4705]: E0124 07:41:21.566088 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.569168 4705 trace.go:236] Trace[855673685]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 07:41:09.676) (total time: 11892ms): Jan 24 07:41:21 crc kubenswrapper[4705]: Trace[855673685]: ---"Objects listed" error: 11892ms (07:41:21.569) Jan 24 07:41:21 crc kubenswrapper[4705]: Trace[855673685]: [11.892839717s] [11.892839717s] END Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.569194 4705 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 24 07:41:21 crc kubenswrapper[4705]: E0124 07:41:21.573760 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.574907 4705 trace.go:236] Trace[1798115516]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 07:41:09.151) (total time: 12423ms): Jan 24 07:41:21 crc kubenswrapper[4705]: Trace[1798115516]: ---"Objects listed" error: 12423ms (07:41:21.574) Jan 24 07:41:21 crc kubenswrapper[4705]: Trace[1798115516]: [12.423342819s] [12.423342819s] END Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.574931 4705 
reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.574958 4705 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.575963 4705 trace.go:236] Trace[528691791]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 07:41:10.161) (total time: 11414ms): Jan 24 07:41:21 crc kubenswrapper[4705]: Trace[528691791]: ---"Objects listed" error: 11413ms (07:41:21.575) Jan 24 07:41:21 crc kubenswrapper[4705]: Trace[528691791]: [11.414043675s] [11.414043675s] END Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.576119 4705 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.755148 4705 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.822781 4705 csr.go:261] certificate signing request csr-dn59p is approved, waiting to be issued Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.823810 4705 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.823881 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.830091 4705 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints 
namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59150->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.830139 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59150->192.168.126.11:17697: read: connection reset by peer" Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.860899 4705 csr.go:257] certificate signing request csr-dn59p is issued Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.861533 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:21 crc kubenswrapper[4705]: I0124 07:41:21.886681 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.641263 4705 apiserver.go:52] "Watching apiserver" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.642349 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:34:16.504640742 +0000 UTC Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.653920 4705 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.655244 4705 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.656403 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:22 crc kubenswrapper[4705]: E0124 07:41:22.656519 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.656587 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.656902 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.659226 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:22 crc kubenswrapper[4705]: E0124 07:41:22.659333 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.659351 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.661736 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.664168 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.664894 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.665021 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.665276 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.665496 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.667196 4705 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.667145 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 24 07:41:22 crc kubenswrapper[4705]: E0124 07:41:22.673706 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.797228 4705 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.797836 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.799052 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.808580 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.808655 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.808799 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.808851 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.808881 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.808902 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.808924 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.808946 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.808966 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.808984 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809027 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809057 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809085 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 
07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809110 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809128 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809166 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809184 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809200 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809216 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809233 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809254 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809273 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809289 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809304 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: 
\"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809319 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809334 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809350 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809366 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809384 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809398 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809413 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809429 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809468 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809483 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809506 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 24 07:41:22 crc 
kubenswrapper[4705]: I0124 07:41:22.809522 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809537 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809553 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809568 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809584 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809598 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809613 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809629 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809642 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809694 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809711 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809735 4705 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809765 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809782 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809802 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809853 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809885 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809906 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809926 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809949 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809971 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.809992 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810014 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810033 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810048 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810063 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810078 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810103 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810120 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810135 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810156 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810171 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810186 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810223 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810238 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810252 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810268 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810288 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810317 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: 
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810287 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810370 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810391 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810415 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810432 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810448 4705 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810466 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810482 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810510 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810527 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810548 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810565 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810597 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810622 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810645 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810667 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810710 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810731 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810753 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810785 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810838 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810858 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 24 07:41:22 crc kubenswrapper[4705]: 
I0124 07:41:22.810881 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810897 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810913 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810930 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810947 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810965 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810983 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.810998 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811013 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811030 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811072 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811088 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811103 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811119 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811136 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811134 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811158 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811178 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811196 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811214 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811262 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811281 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811303 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811320 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811344 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811360 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811376 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811392 4705 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811409 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811425 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811442 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811461 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811478 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811495 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811511 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811527 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811601 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811619 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811634 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811649 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811664 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811684 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811700 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811716 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811737 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811754 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811770 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811787 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.811804 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.814365 4705 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:41:22 crc kubenswrapper[4705]: E0124 07:41:22.814504 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:41:23.314484502 +0000 UTC m=+22.034357790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.814734 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.814922 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.814965 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.815690 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.815902 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817099 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817152 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817172 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817195 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817217 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817244 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817264 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817282 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817299 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817316 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817335 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817353 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817370 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817390 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817406 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817423 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817442 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817469 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817485 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.817503 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.818755 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.818790 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 24 07:41:22 crc kubenswrapper[4705]: 
I0124 07:41:22.818837 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.818864 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.818888 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.818912 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.818938 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.818961 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") 
pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.818989 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819013 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819038 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819062 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819084 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 
07:41:22.819107 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819133 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819161 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819186 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819212 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819234 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819257 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819282 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819304 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819336 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819360 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819385 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819410 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819435 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819469 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819522 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819548 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819573 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819598 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819665 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819699 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819723 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 
07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819748 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819772 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819796 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819848 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819877 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819905 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819930 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819954 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.819979 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.820007 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" 
(UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.820033 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.820108 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.820126 4705 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.820139 4705 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.820152 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.820167 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath 
\"\"" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.820180 4705 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.820195 4705 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.820208 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.823211 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.823242 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.823436 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.824393 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.825244 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.825379 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.825564 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.825619 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.825795 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.825812 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.826698 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.826793 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.826880 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.826905 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.826973 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.827099 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.827337 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.827728 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.828046 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.828071 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.828319 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.828649 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.829008 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.829130 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.829128 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.829644 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.829784 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.830107 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.830565 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.831101 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.831196 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.831443 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.831741 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.831956 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.832223 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.832305 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.832405 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.832665 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.832779 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.833071 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.833211 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.833409 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.833637 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.833905 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.834008 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.834197 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.835029 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: E0124 07:41:22.829955 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 24 07:41:22 crc kubenswrapper[4705]: E0124 07:41:22.835336 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:23.335311333 +0000 UTC m=+22.055184621 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 24 07:41:22 crc kubenswrapper[4705]: E0124 07:41:22.835431 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 24 07:41:22 crc kubenswrapper[4705]: E0124 07:41:22.835463 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:23.335456728 +0000 UTC m=+22.055330016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.835477 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.835591 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.835809 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.835971 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.836100 4705 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.836150 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.836141 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.836393 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.836602 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.836749 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.836898 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.836899 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.839246 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.839295 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.839487 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.839662 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.840454 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.840968 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.842537 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.843071 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.843399 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.844174 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.844189 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.844480 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.844647 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.844911 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.845074 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.845371 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.845620 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.845662 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.844963 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.845961 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.845988 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.846219 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.846276 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.846301 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.846364 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.846251 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.846718 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.847092 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.847121 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.847209 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.851401 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:22 crc kubenswrapper[4705]: I0124 07:41:22.998936 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.000023 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:22.999802 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.000869 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.001358 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.001261 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.001386 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.002025 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.002333 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.002592 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.003032 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.003251 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.003477 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.003587 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.003802 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.004007 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.004632 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.005006 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.005914 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.006473 4705 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.006527 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.006937 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.006959 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.007159 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.007216 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.007490 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.007663 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.007835 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.008219 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.008625 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.008747 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.009198 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.009620 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.009954 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.010345 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.010732 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.010737 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.010958 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.011290 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.014054 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.014404 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.036286 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.036637 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.036696 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.036756 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.037298 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.040694 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.040885 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.041480 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.041578 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.042076 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.042110 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.042171 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.042232 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.042239 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.042414 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.042564 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.042580 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.042607 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.042778 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.042801 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.042903 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.042989 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.001172 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.043158 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.043296 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.043342 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.043380 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-24 07:36:21 +0000 UTC, rotation deadline is 2026-11-23 10:20:29.056602047 +0000 UTC Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.043339 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.043399 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7274h39m6.013205221s for next certificate rotation Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.043413 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.043780 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.044184 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.044657 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.044716 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.045054 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.045151 4705 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.045162 
4705 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.045171 4705 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.045181 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.045190 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.045199 4705 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.045207 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.045216 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.046070 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.048066 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.048805 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.048893 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.048906 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.048934 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.048948 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.049016 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:23.548993161 +0000 UTC m=+22.268866519 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.049967 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050069 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050130 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050156 4705 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050171 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050183 4705 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050194 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050204 4705 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050216 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050227 
4705 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050238 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050250 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050261 4705 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050273 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050287 4705 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050299 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050310 4705 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050321 4705 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050332 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050359 4705 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050370 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050381 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050393 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050404 4705 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc 
kubenswrapper[4705]: I0124 07:41:23.050415 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050427 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050439 4705 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050450 4705 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050461 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050474 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050486 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 
07:41:23.050497 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050516 4705 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050527 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050538 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050551 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050563 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050572 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050580 4705 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050589 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050597 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050605 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050612 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050621 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050629 4705 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050637 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath 
\"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050645 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050653 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050662 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050669 4705 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050697 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050706 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050714 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050766 4705 
reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050776 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050786 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050796 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050804 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050811 4705 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050838 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050848 4705 reconciler_common.go:293] "Volume 
detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050860 4705 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050868 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050876 4705 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050885 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050893 4705 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050901 4705 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050909 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050918 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050926 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050934 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050942 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050950 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050958 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050966 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node 
\"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050974 4705 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050982 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.050991 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051000 4705 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051008 4705 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051016 4705 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051023 4705 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 
07:41:23.051031 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051039 4705 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051082 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051091 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051101 4705 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051111 4705 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051119 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051128 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051137 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051147 4705 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051156 4705 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051165 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051175 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051184 4705 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051192 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051257 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051266 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051274 4705 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051281 4705 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051289 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051298 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051306 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node 
\"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051314 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051323 4705 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051331 4705 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051339 4705 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051347 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051351 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051379 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051389 4705 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051398 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051405 4705 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051446 4705 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051454 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051464 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051473 4705 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051480 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051488 4705 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051496 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051504 4705 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051512 4705 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051553 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051576 4705 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051586 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051594 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051602 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051610 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051617 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051624 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051650 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051658 4705 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051665 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051674 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051682 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051690 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051697 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051705 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc 
kubenswrapper[4705]: I0124 07:41:23.051713 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051739 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051747 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051755 4705 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051781 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051789 4705 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051796 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051803 4705 reconciler_common.go:293] "Volume detached for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051811 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051844 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.051855 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.052680 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.053875 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.057309 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.057701 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.058376 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.060714 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.061284 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.063211 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.063317 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.063347 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.063361 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.063433 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:23.563414102 +0000 UTC m=+22.283287460 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.063676 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.063801 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.065300 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.065389 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.065622 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3" exitCode=255 Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.066198 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3"} Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.069106 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.069220 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.069344 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.069437 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.070311 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.070407 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.070622 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.070695 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.071132 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.071237 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.071554 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.072323 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.079937 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.079942 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.080234 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.080529 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.080914 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.084669 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.089228 4705 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.099900 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.100244 4705 scope.go:117] "RemoveContainer" containerID="b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.100568 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.115048 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.124033 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.136439 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.150072 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.154746 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.154789 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.154800 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155033 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155077 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155094 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node 
\"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155117 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155132 4705 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155149 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155494 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155529 4705 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155539 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155549 4705 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc 
kubenswrapper[4705]: I0124 07:41:23.155561 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155570 4705 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155579 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155588 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155601 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155610 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155620 4705 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155631 4705 reconciler_common.go:293] "Volume detached for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155654 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155663 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155672 4705 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155680 4705 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155691 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155700 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155708 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155719 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155728 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.155737 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.267102 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.305941 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.362399 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.362517 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.362563 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.362694 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:41:24.362656377 +0000 UTC m=+23.082529675 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.362708 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.362756 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.362787 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:24.36277202 +0000 UTC m=+23.082645378 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.363309 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-24 07:41:24.362799561 +0000 UTC m=+23.082672849 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.403638 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.451685 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-h9wbv"] Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.452146 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565191 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-cni-binary-copy\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565235 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-run-k8s-cni-cncf-io\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565255 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-multus-conf-dir\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565283 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565311 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-cnibin\") pod \"multus-h9wbv\" (UID: 
\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565331 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-multus-daemon-config\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565349 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-etc-kubernetes\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565377 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-os-release\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565398 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565420 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-var-lib-cni-bin\") pod \"multus-h9wbv\" (UID: 
\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565440 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-var-lib-cni-multus\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565479 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-multus-socket-dir-parent\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565502 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-run-multus-certs\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565521 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mqzg\" (UniqueName: \"kubernetes.io/projected/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-kube-api-access-4mqzg\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565541 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-multus-cni-dir\") pod \"multus-h9wbv\" (UID: 
\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565588 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-run-netns\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565608 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-var-lib-kubelet\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565629 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-hostroot\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.565649 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-system-cni-dir\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.565803 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.565841 4705 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.565854 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.565898 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:24.565881957 +0000 UTC m=+23.285755245 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.565977 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.565990 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.565999 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:23 crc kubenswrapper[4705]: E0124 07:41:23.566024 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:24.566015481 +0000 UTC m=+23.285888769 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.573840 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.574053 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.574080 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.574210 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.574311 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.574359 4705 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.574613 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-gbn67"] Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.575620 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gbn67" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.579450 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.580948 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.581743 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.582785 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.583368 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.584335 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.585582 4705 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.586247 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.587282 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.587764 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.588676 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.589354 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.590197 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.590687 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.591575 4705 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.592154 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.592702 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.593458 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.594123 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.594682 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.595564 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.596261 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.597409 4705 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.598263 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.598876 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.599389 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.601471 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.603383 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.604482 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.605210 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.606208 4705 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.607990 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.619395 4705 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.619531 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.621368 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.622024 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.622475 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.624334 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.624998 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.625901 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.626519 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.627559 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.628039 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.628629 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.629612 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.630557 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.631007 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.631971 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.632438 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.633600 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.634117 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.635004 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.635437 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.636039 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.637076 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.637518 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.638115 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.638387 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-dxqp2"] Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.638669 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-w9jkp"] Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.638748 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.638936 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.638959 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-js42b"] Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.639127 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-w9jkp" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.640194 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.647547 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.647894 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.648197 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.648393 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.649320 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.649430 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.649512 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.652262 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.652649 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.648873 4705 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 10:54:42.750998183 +0000 UTC Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.652924 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.653075 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.653376 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.658304 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.665398 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.665476 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670408 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcjx8\" (UniqueName: \"kubernetes.io/projected/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-kube-api-access-bcjx8\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670457 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxmq2\" (UniqueName: \"kubernetes.io/projected/2164d9f8-41b8-4380-b070-76e772189f1a-kube-api-access-mxmq2\") pod 
\"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670493 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-openvswitch\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670510 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-env-overrides\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670534 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-run-netns\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670553 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-systemd-units\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670593 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovnkube-config\") pod 
\"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670611 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7b3b969-5164-4f10-8758-72b7e2f4b762-mcd-auth-proxy-config\") pod \"machine-config-daemon-dxqp2\" (UID: \"a7b3b969-5164-4f10-8758-72b7e2f4b762\") " pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670628 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2164d9f8-41b8-4380-b070-76e772189f1a-cni-binary-copy\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670647 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-system-cni-dir\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670665 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-var-lib-kubelet\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670683 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/a7b3b969-5164-4f10-8758-72b7e2f4b762-proxy-tls\") pod \"machine-config-daemon-dxqp2\" (UID: \"a7b3b969-5164-4f10-8758-72b7e2f4b762\") " pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670699 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2164d9f8-41b8-4380-b070-76e772189f1a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670730 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-multus-conf-dir\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670747 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-ovn\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670767 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-cnibin\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670790 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-etc-openvswitch\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670808 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-multus-daemon-config\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670850 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-os-release\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670871 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-var-lib-cni-bin\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670889 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-var-lib-cni-multus\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670909 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6jvb\" (UniqueName: \"kubernetes.io/projected/a7b3b969-5164-4f10-8758-72b7e2f4b762-kube-api-access-q6jvb\") pod 
\"machine-config-daemon-dxqp2\" (UID: \"a7b3b969-5164-4f10-8758-72b7e2f4b762\") " pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670943 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-log-socket\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670963 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rqtq\" (UniqueName: \"kubernetes.io/projected/31854c6e-066f-4612-88b4-1e156b4770e9-kube-api-access-6rqtq\") pod \"node-resolver-w9jkp\" (UID: \"31854c6e-066f-4612-88b4-1e156b4770e9\") " pod="openshift-dns/node-resolver-w9jkp" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.670978 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/31854c6e-066f-4612-88b4-1e156b4770e9-hosts-file\") pod \"node-resolver-w9jkp\" (UID: \"31854c6e-066f-4612-88b4-1e156b4770e9\") " pod="openshift-dns/node-resolver-w9jkp" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.671017 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-multus-socket-dir-parent\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.671038 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mqzg\" (UniqueName: \"kubernetes.io/projected/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-kube-api-access-4mqzg\") pod 
\"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.671037 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-system-cni-dir\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.671097 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-cni-netd\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.671218 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-multus-conf-dir\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.671270 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-run-netns\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.671284 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-cnibin\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.671367 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovnkube-script-lib\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.671428 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2164d9f8-41b8-4380-b070-76e772189f1a-system-cni-dir\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.671471 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-multus-cni-dir\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.671559 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-run-multus-certs\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.671592 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-cni-bin\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.671587 4705 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.671623 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.671677 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2164d9f8-41b8-4380-b070-76e772189f1a-cnibin\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.671914 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-multus-daemon-config\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.672023 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-hostroot\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.672076 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-os-release\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.672093 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-multus-cni-dir\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.672114 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-var-lib-cni-bin\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.672135 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-var-lib-kubelet\") pod \"multus-h9wbv\" (UID: 
\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.672162 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-var-lib-cni-multus\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.672192 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-run-multus-certs\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.672243 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-hostroot\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.672259 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-multus-socket-dir-parent\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.672279 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-cni-binary-copy\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.676449 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-cni-binary-copy\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.676916 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-run-k8s-cni-cncf-io\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.676949 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-kubelet\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.676965 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-var-lib-openvswitch\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.676988 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-run-ovn-kubernetes\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.677005 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2164d9f8-41b8-4380-b070-76e772189f1a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.677020 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a7b3b969-5164-4f10-8758-72b7e2f4b762-rootfs\") pod \"machine-config-daemon-dxqp2\" (UID: \"a7b3b969-5164-4f10-8758-72b7e2f4b762\") " pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.677029 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-host-run-k8s-cni-cncf-io\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.677036 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-systemd\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.677073 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-run-netns\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 
07:41:23.677093 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2164d9f8-41b8-4380-b070-76e772189f1a-os-release\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.677183 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-etc-kubernetes\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.677197 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-etc-kubernetes\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.677262 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-slash\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.677300 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-node-log\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.677326 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovn-node-metrics-cert\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.686010 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"20
26-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83
accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.697049 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mqzg\" (UniqueName: \"kubernetes.io/projected/5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd-kube-api-access-4mqzg\") pod \"multus-h9wbv\" (UID: \"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\") " pod="openshift-multus/multus-h9wbv" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.742730 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.764487 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.775008 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798512 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/31854c6e-066f-4612-88b4-1e156b4770e9-hosts-file\") pod \"node-resolver-w9jkp\" (UID: \"31854c6e-066f-4612-88b4-1e156b4770e9\") " pod="openshift-dns/node-resolver-w9jkp" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798557 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-cni-netd\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798574 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovnkube-script-lib\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:23 
crc kubenswrapper[4705]: I0124 07:41:23.798590 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2164d9f8-41b8-4380-b070-76e772189f1a-system-cni-dir\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798607 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798624 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2164d9f8-41b8-4380-b070-76e772189f1a-cnibin\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798639 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-cni-bin\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798653 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-kubelet\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798667 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-var-lib-openvswitch\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798681 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-run-ovn-kubernetes\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798696 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2164d9f8-41b8-4380-b070-76e772189f1a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798711 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-systemd\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798725 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a7b3b969-5164-4f10-8758-72b7e2f4b762-rootfs\") pod \"machine-config-daemon-dxqp2\" (UID: \"a7b3b969-5164-4f10-8758-72b7e2f4b762\") " pod="openshift-machine-config-operator/machine-config-daemon-dxqp2"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798743 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2164d9f8-41b8-4380-b070-76e772189f1a-os-release\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798775 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-run-netns\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798795 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-slash\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798807 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-node-log\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798836 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovn-node-metrics-cert\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798851 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxmq2\" (UniqueName: \"kubernetes.io/projected/2164d9f8-41b8-4380-b070-76e772189f1a-kube-api-access-mxmq2\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798867 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-openvswitch\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798881 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-env-overrides\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798896 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcjx8\" (UniqueName: \"kubernetes.io/projected/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-kube-api-access-bcjx8\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798913 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovnkube-config\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798930 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7b3b969-5164-4f10-8758-72b7e2f4b762-mcd-auth-proxy-config\") pod \"machine-config-daemon-dxqp2\" (UID: \"a7b3b969-5164-4f10-8758-72b7e2f4b762\") " pod="openshift-machine-config-operator/machine-config-daemon-dxqp2"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798949 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2164d9f8-41b8-4380-b070-76e772189f1a-cni-binary-copy\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798965 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-systemd-units\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.798980 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2164d9f8-41b8-4380-b070-76e772189f1a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.799014 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-ovn\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.799028 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7b3b969-5164-4f10-8758-72b7e2f4b762-proxy-tls\") pod \"machine-config-daemon-dxqp2\" (UID: \"a7b3b969-5164-4f10-8758-72b7e2f4b762\") " pod="openshift-machine-config-operator/machine-config-daemon-dxqp2"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.799043 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-etc-openvswitch\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.799071 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6jvb\" (UniqueName: \"kubernetes.io/projected/a7b3b969-5164-4f10-8758-72b7e2f4b762-kube-api-access-q6jvb\") pod \"machine-config-daemon-dxqp2\" (UID: \"a7b3b969-5164-4f10-8758-72b7e2f4b762\") " pod="openshift-machine-config-operator/machine-config-daemon-dxqp2"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.799090 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-log-socket\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.799103 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rqtq\" (UniqueName: \"kubernetes.io/projected/31854c6e-066f-4612-88b4-1e156b4770e9-kube-api-access-6rqtq\") pod \"node-resolver-w9jkp\" (UID: \"31854c6e-066f-4612-88b4-1e156b4770e9\") " pod="openshift-dns/node-resolver-w9jkp"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.799231 4705
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-slash\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.799372 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/31854c6e-066f-4612-88b4-1e156b4770e9-hosts-file\") pod \"node-resolver-w9jkp\" (UID: \"31854c6e-066f-4612-88b4-1e156b4770e9\") " pod="openshift-dns/node-resolver-w9jkp"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.799414 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-cni-netd\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.799423 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-node-log\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.800379 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovnkube-script-lib\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.800432 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2164d9f8-41b8-4380-b070-76e772189f1a-system-cni-dir\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.800458 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.800480 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2164d9f8-41b8-4380-b070-76e772189f1a-cnibin\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.800499 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-cni-bin\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.800519 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-kubelet\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.800541 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-var-lib-openvswitch\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.800566 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-run-ovn-kubernetes\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.800848 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2164d9f8-41b8-4380-b070-76e772189f1a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.800890 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-systemd\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.800913 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a7b3b969-5164-4f10-8758-72b7e2f4b762-rootfs\") pod \"machine-config-daemon-dxqp2\" (UID: \"a7b3b969-5164-4f10-8758-72b7e2f4b762\") " pod="openshift-machine-config-operator/machine-config-daemon-dxqp2"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.800962 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2164d9f8-41b8-4380-b070-76e772189f1a-os-release\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.800983 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-run-netns\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.801634 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2164d9f8-41b8-4380-b070-76e772189f1a-cni-binary-copy\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.801965 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-openvswitch\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.802501 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovnkube-config\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.802637 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-env-overrides\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.803056 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7b3b969-5164-4f10-8758-72b7e2f4b762-mcd-auth-proxy-config\") pod \"machine-config-daemon-dxqp2\" (UID: \"a7b3b969-5164-4f10-8758-72b7e2f4b762\") " pod="openshift-machine-config-operator/machine-config-daemon-dxqp2"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.803211 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-etc-openvswitch\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.803487 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovn-node-metrics-cert\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.803562 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-log-socket\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.803593 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-systemd-units\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.803594 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2164d9f8-41b8-4380-b070-76e772189f1a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.803618 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-ovn\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.808488 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7b3b969-5164-4f10-8758-72b7e2f4b762-proxy-tls\") pod \"machine-config-daemon-dxqp2\" (UID: \"a7b3b969-5164-4f10-8758-72b7e2f4b762\") " pod="openshift-machine-config-operator/machine-config-daemon-dxqp2"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.813009 4705 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/multus-h9wbv"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.819805 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.825229 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxmq2\" (UniqueName: \"kubernetes.io/projected/2164d9f8-41b8-4380-b070-76e772189f1a-kube-api-access-mxmq2\") pod \"multus-additional-cni-plugins-gbn67\" (UID: \"2164d9f8-41b8-4380-b070-76e772189f1a\") " pod="openshift-multus/multus-additional-cni-plugins-gbn67"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.875013 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcjx8\" (UniqueName: \"kubernetes.io/projected/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-kube-api-access-bcjx8\") pod \"ovnkube-node-js42b\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.877227 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6jvb\" (UniqueName: \"kubernetes.io/projected/a7b3b969-5164-4f10-8758-72b7e2f4b762-kube-api-access-q6jvb\") pod \"machine-config-daemon-dxqp2\" (UID: \"a7b3b969-5164-4f10-8758-72b7e2f4b762\") " pod="openshift-machine-config-operator/machine-config-daemon-dxqp2"
Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.888616 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.897671 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rqtq\" (UniqueName: \"kubernetes.io/projected/31854c6e-066f-4612-88b4-1e156b4770e9-kube-api-access-6rqtq\") pod \"node-resolver-w9jkp\" (UID: \"31854c6e-066f-4612-88b4-1e156b4770e9\") " pod="openshift-dns/node-resolver-w9jkp" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.900120 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.919252 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.935194 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gbn67" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.948323 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC 
(now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.962608 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.987539 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.990188 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 07:41:23 crc kubenswrapper[4705]: I0124 07:41:23.999373 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.013579 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.026363 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.048269 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-w9jkp" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.074187 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.081539 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.090332 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07"} Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.091294 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.094948 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9wbv" event={"ID":"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd","Type":"ContainerStarted","Data":"c6c734a6b2485eac6d4ab5e332d7c8ee5be903cb88ffa75e95c6ea8b63e90704"} Jan 24 07:41:24 crc kubenswrapper[4705]: W0124 07:41:24.101251 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31854c6e_066f_4612_88b4_1e156b4770e9.slice/crio-594f3a4656d0dfc2fee4159e791def4aebf8033233f94820821ce0d2b4f8c40f WatchSource:0}: Error finding container 594f3a4656d0dfc2fee4159e791def4aebf8033233f94820821ce0d2b4f8c40f: Status 404 returned error can't find the container with id 594f3a4656d0dfc2fee4159e791def4aebf8033233f94820821ce0d2b4f8c40f Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.105095 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca"} Jan 
24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.105134 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1026332888e8c2b98764072d77c8c5897330568620c62f947fd1605e061386a1"} Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.107581 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.122998 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b
6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.135026 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"31a9574181b8993175dba36424ca56e664c9df9086e26be1fc15376f25cc33a0"} Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.138298 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032"} Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.138505 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e5a313bdeee961fc5de76620e1e8c6d318372bed5bf4e9e103cfbe08a79f124f"} Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.227910 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.228162 4705 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.263610 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.285525 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.414299 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.414543 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:41:26.414522434 +0000 UTC m=+25.134395732 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.414581 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.414610 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.414738 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.414778 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:26.414768922 +0000 UTC m=+25.134642210 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.415176 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.415210 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:26.415199615 +0000 UTC m=+25.135072903 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.575056 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.575177 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.575556 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.575607 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.575657 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.575721 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.663577 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 05:08:25.312412487 +0000 UTC Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.663953 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.664041 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.664187 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.664227 4705 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.664240 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.664269 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.664295 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.664307 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:26.664287301 +0000 UTC m=+25.384160649 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.664309 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:24 crc kubenswrapper[4705]: E0124 07:41:24.664384 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:26.664361253 +0000 UTC m=+25.384234601 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.695340 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert
-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:24 crc kubenswrapper[4705]: I0124 07:41:24.969481 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.045192 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.120738 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.143150 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d"} Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.144148 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerStarted","Data":"5c6f6a8f48c413f8fdefb764ad0bcad429c0095b81cd62a9ca17f7ff1ddb8416"} Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.145143 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" event={"ID":"2164d9f8-41b8-4380-b070-76e772189f1a","Type":"ContainerStarted","Data":"5704ad5a9122feabb0d7cc0fda8c2d7547f6731ab9f6525b5583b94802456b8c"} Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.146148 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-w9jkp" event={"ID":"31854c6e-066f-4612-88b4-1e156b4770e9","Type":"ContainerStarted","Data":"594f3a4656d0dfc2fee4159e791def4aebf8033233f94820821ce0d2b4f8c40f"} Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.147071 4705 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.147574 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"97f9fe96c786540f08aab134992e3a68ac584e6c9124efe1ee5f706a61a4563e"} Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.149238 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9wbv" event={"ID":"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd","Type":"ContainerStarted","Data":"b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0"} Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.174687 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.192617 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.207676 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.227337 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.248698 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.290110 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:25Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.306718 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:25Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.349976 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:25Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.362332 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:25Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.373574 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:25Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.396072 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:25Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.513496 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni
/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:25Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.646148 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:25Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.664426 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:00:12.268293362 +0000 UTC Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.664501 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:25Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.688949 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:25Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.699198 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:25Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.711483 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:25Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.727390 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:25Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.740773 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:25Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:25 crc kubenswrapper[4705]: I0124 07:41:25.767746 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:25Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.153536 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerID="61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9" exitCode=0 Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.153676 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerDied","Data":"61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9"} Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.155068 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"431cb8075f18f140b73d9e089cff43755e810217f0785819ba908696396884a8"} Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.156231 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" event={"ID":"2164d9f8-41b8-4380-b070-76e772189f1a","Type":"ContainerStarted","Data":"67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90"} Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.182331 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.195062 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.203743 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.255093 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.270155 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.285936 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.302137 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.315562 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.400350 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.417097 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.486942 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.487057 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.487162 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.487181 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:26 crc kubenswrapper[4705]: E0124 07:41:26.487273 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 07:41:26 crc kubenswrapper[4705]: E0124 07:41:26.487343 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:30.487329404 +0000 UTC m=+29.207202692 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 07:41:26 crc kubenswrapper[4705]: E0124 07:41:26.487515 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 07:41:26 crc kubenswrapper[4705]: E0124 07:41:26.487545 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:41:30.48752482 +0000 UTC m=+29.207398128 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:41:26 crc kubenswrapper[4705]: E0124 07:41:26.487569 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:30.487559861 +0000 UTC m=+29.207433249 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.545264 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.570450 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.575675 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:26 crc kubenswrapper[4705]: E0124 07:41:26.575858 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.576233 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:26 crc kubenswrapper[4705]: E0124 07:41:26.576324 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.576389 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:26 crc kubenswrapper[4705]: E0124 07:41:26.576455 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.581388 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.596472 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] 
Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.674188 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 21:24:28.463091315 +0000 UTC Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.678657 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.697473 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.697522 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" 
(UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:26 crc kubenswrapper[4705]: E0124 07:41:26.697711 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 07:41:26 crc kubenswrapper[4705]: E0124 07:41:26.697736 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 07:41:26 crc kubenswrapper[4705]: E0124 07:41:26.697751 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:26 crc kubenswrapper[4705]: E0124 07:41:26.697803 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:30.69778328 +0000 UTC m=+29.417656568 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:26 crc kubenswrapper[4705]: E0124 07:41:26.697878 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 07:41:26 crc kubenswrapper[4705]: E0124 07:41:26.697892 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 07:41:26 crc kubenswrapper[4705]: E0124 07:41:26.697903 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:26 crc kubenswrapper[4705]: E0124 07:41:26.697931 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:30.697923344 +0000 UTC m=+29.417796632 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.728611 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.791804 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.875071 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.907046 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.922112 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.940297 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.957745 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.975167 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:26 crc kubenswrapper[4705]: I0124 07:41:26.992303 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:26Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.034758 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.045943 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.056010 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.179857 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerStarted","Data":"a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf"} Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.179899 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerStarted","Data":"eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5"} Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.181029 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-w9jkp" event={"ID":"31854c6e-066f-4612-88b4-1e156b4770e9","Type":"ContainerStarted","Data":"814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37"} Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.183791 4705 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc"} Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.204030 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.223116 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.236726 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.249365 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.262153 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.272477 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.285163 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.298543 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.312053 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.327964 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.354734 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.368025 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.486604 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.539108 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.548882 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.561059 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.572476 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.585901 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.602984 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.618808 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.639031 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.656967 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.670221 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.674599 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:35:57.709685696 +0000 UTC Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.682480 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.693896 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.706616 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.715256 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.734004 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:27Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.974689 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.976146 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.976179 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.976188 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.976294 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.986610 4705 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.986940 4705 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.998311 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:27 crc 
kubenswrapper[4705]: I0124 07:41:27.998357 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.998369 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.998386 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:27 crc kubenswrapper[4705]: I0124 07:41:27.998397 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:27Z","lastTransitionTime":"2026-01-24T07:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:28 crc kubenswrapper[4705]: E0124 07:41:28.041451 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.045503 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.045531 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.045542 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.045557 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.045569 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:28Z","lastTransitionTime":"2026-01-24T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:28 crc kubenswrapper[4705]: E0124 07:41:28.057752 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.060830 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.060862 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.060873 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.060888 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.060904 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:28Z","lastTransitionTime":"2026-01-24T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:28 crc kubenswrapper[4705]: E0124 07:41:28.073106 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.076470 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.076499 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.076507 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.076520 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.076528 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:28Z","lastTransitionTime":"2026-01-24T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:28 crc kubenswrapper[4705]: E0124 07:41:28.087615 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.091197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.091251 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.091262 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.091279 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.091290 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:28Z","lastTransitionTime":"2026-01-24T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:28 crc kubenswrapper[4705]: E0124 07:41:28.101562 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: E0124 07:41:28.101716 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.103414 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.103454 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.103464 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.103478 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.103489 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:28Z","lastTransitionTime":"2026-01-24T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.205549 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.205583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.205595 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.205610 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.205621 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:28Z","lastTransitionTime":"2026-01-24T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.230929 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerStarted","Data":"d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2"} Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.230974 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerStarted","Data":"48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b"} Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.232264 4705 generic.go:334] "Generic (PLEG): container finished" podID="2164d9f8-41b8-4380-b070-76e772189f1a" containerID="67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90" exitCode=0 Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.232807 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" event={"ID":"2164d9f8-41b8-4380-b070-76e772189f1a","Type":"ContainerDied","Data":"67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90"} Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.246673 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.266940 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.304350 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.309019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.309067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.309078 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.309093 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.309102 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:28Z","lastTransitionTime":"2026-01-24T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.340457 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.373323 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.408353 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.410788 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.410812 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.410908 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.410924 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.410932 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:28Z","lastTransitionTime":"2026-01-24T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.430330 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.476432 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.491901 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18f140b73d9e089cff43755e810217
f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.507654 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.518146 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.518182 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.518191 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.518205 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.518214 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:28Z","lastTransitionTime":"2026-01-24T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.528341 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.541413 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.555858 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.569277 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.575074 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.575132 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:28 crc kubenswrapper[4705]: E0124 07:41:28.575200 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.575147 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:28 crc kubenswrapper[4705]: E0124 07:41:28.575360 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:28 crc kubenswrapper[4705]: E0124 07:41:28.575518 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.620472 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.620534 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.620544 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.620565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.620574 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:28Z","lastTransitionTime":"2026-01-24T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.679060 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 11:06:31.232440443 +0000 UTC Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.730003 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.730349 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.730359 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.730377 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.730389 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:28Z","lastTransitionTime":"2026-01-24T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.833369 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.833422 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.833433 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.833448 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.833461 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:28Z","lastTransitionTime":"2026-01-24T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.935547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.935603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.935615 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.935630 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:28 crc kubenswrapper[4705]: I0124 07:41:28.935639 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:28Z","lastTransitionTime":"2026-01-24T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.039638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.039688 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.039700 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.039716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.039727 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:29Z","lastTransitionTime":"2026-01-24T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.142945 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.143001 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.143011 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.143026 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.143037 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:29Z","lastTransitionTime":"2026-01-24T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.225301 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-kzgk6"] Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.225726 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kzgk6" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.229062 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.229110 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.229135 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.229380 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.240988 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerStarted","Data":"16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b"} Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.241039 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerStarted","Data":"4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426"} Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.242644 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" event={"ID":"2164d9f8-41b8-4380-b070-76e772189f1a","Type":"ContainerStarted","Data":"080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3"} Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.245114 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 
07:41:29.245151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.245163 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.245180 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.245193 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:29Z","lastTransitionTime":"2026-01-24T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.255888 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.266022 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.276385 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.286851 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.297517 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.309947 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.320015 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.333729 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45a5e20c-e474-4205-bb02-883ca9bb71f8-host\") pod \"node-ca-kzgk6\" (UID: \"45a5e20c-e474-4205-bb02-883ca9bb71f8\") " pod="openshift-image-registry/node-ca-kzgk6" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.334074 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/45a5e20c-e474-4205-bb02-883ca9bb71f8-serviceca\") pod \"node-ca-kzgk6\" (UID: \"45a5e20c-e474-4205-bb02-883ca9bb71f8\") " pod="openshift-image-registry/node-ca-kzgk6" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.334100 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc2zb\" (UniqueName: \"kubernetes.io/projected/45a5e20c-e474-4205-bb02-883ca9bb71f8-kube-api-access-fc2zb\") pod \"node-ca-kzgk6\" (UID: \"45a5e20c-e474-4205-bb02-883ca9bb71f8\") " pod="openshift-image-registry/node-ca-kzgk6" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.337475 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.348318 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.348569 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.348643 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.348740 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.348837 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:29Z","lastTransitionTime":"2026-01-24T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.349000 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.360120 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.371312 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.382608 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.397685 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.412163 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.426958 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.435327 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45a5e20c-e474-4205-bb02-883ca9bb71f8-host\") pod \"node-ca-kzgk6\" (UID: \"45a5e20c-e474-4205-bb02-883ca9bb71f8\") " pod="openshift-image-registry/node-ca-kzgk6" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.435411 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/45a5e20c-e474-4205-bb02-883ca9bb71f8-serviceca\") pod \"node-ca-kzgk6\" (UID: \"45a5e20c-e474-4205-bb02-883ca9bb71f8\") " pod="openshift-image-registry/node-ca-kzgk6" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.435445 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc2zb\" (UniqueName: 
\"kubernetes.io/projected/45a5e20c-e474-4205-bb02-883ca9bb71f8-kube-api-access-fc2zb\") pod \"node-ca-kzgk6\" (UID: \"45a5e20c-e474-4205-bb02-883ca9bb71f8\") " pod="openshift-image-registry/node-ca-kzgk6" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.435467 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45a5e20c-e474-4205-bb02-883ca9bb71f8-host\") pod \"node-ca-kzgk6\" (UID: \"45a5e20c-e474-4205-bb02-883ca9bb71f8\") " pod="openshift-image-registry/node-ca-kzgk6" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.437006 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/45a5e20c-e474-4205-bb02-883ca9bb71f8-serviceca\") pod \"node-ca-kzgk6\" (UID: \"45a5e20c-e474-4205-bb02-883ca9bb71f8\") " pod="openshift-image-registry/node-ca-kzgk6" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.441503 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.451061 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.451100 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.451120 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.451143 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.451155 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:29Z","lastTransitionTime":"2026-01-24T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.456403 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.474763 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc2zb\" (UniqueName: \"kubernetes.io/projected/45a5e20c-e474-4205-bb02-883ca9bb71f8-kube-api-access-fc2zb\") pod \"node-ca-kzgk6\" (UID: \"45a5e20c-e474-4205-bb02-883ca9bb71f8\") " pod="openshift-image-registry/node-ca-kzgk6" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.477384 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.490215 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.504210 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.519186 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.532647 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.544021 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kzgk6" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.554676 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.554719 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.554727 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.554743 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.554753 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:29Z","lastTransitionTime":"2026-01-24T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.574087 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: W0124 07:41:29.574584 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45a5e20c_e474_4205_bb02_883ca9bb71f8.slice/crio-d09a69c670dd832f4cc3c8e537507b9b79037a690a1e20342c94cbb0c93b6651 WatchSource:0}: Error finding container d09a69c670dd832f4cc3c8e537507b9b79037a690a1e20342c94cbb0c93b6651: Status 404 returned error can't find the container with id d09a69c670dd832f4cc3c8e537507b9b79037a690a1e20342c94cbb0c93b6651 Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.587990 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.611230 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.627117 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.643185 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.658987 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.659069 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.659108 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 
07:41:29.659128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.659142 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:29Z","lastTransitionTime":"2026-01-24T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.659415 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.675713 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bde
af3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.679279 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:13:54.542592458 +0000 UTC Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.689276 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.761747 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.761810 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.761847 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 
07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.761873 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.761891 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:29Z","lastTransitionTime":"2026-01-24T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.864205 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.864257 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.864270 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.864288 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.864301 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:29Z","lastTransitionTime":"2026-01-24T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.967924 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.967976 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.967988 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.968008 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:29 crc kubenswrapper[4705]: I0124 07:41:29.968021 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:29Z","lastTransitionTime":"2026-01-24T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.070525 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.070551 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.070560 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.070574 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.070582 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:30Z","lastTransitionTime":"2026-01-24T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.172668 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.172702 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.172716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.172734 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.172744 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:30Z","lastTransitionTime":"2026-01-24T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.249169 4705 generic.go:334] "Generic (PLEG): container finished" podID="2164d9f8-41b8-4380-b070-76e772189f1a" containerID="080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3" exitCode=0 Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.249223 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" event={"ID":"2164d9f8-41b8-4380-b070-76e772189f1a","Type":"ContainerDied","Data":"080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3"} Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.251257 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kzgk6" event={"ID":"45a5e20c-e474-4205-bb02-883ca9bb71f8","Type":"ContainerStarted","Data":"512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4"} Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.251292 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kzgk6" event={"ID":"45a5e20c-e474-4205-bb02-883ca9bb71f8","Type":"ContainerStarted","Data":"d09a69c670dd832f4cc3c8e537507b9b79037a690a1e20342c94cbb0c93b6651"} Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.263343 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.275296 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.275333 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.275344 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.275359 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.275370 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:30Z","lastTransitionTime":"2026-01-24T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.282701 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.301604 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.317265 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.329639 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.340416 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.350300 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.361108 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.372999 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.377462 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.377507 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 
07:41:30.377520 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.377539 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.377558 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:30Z","lastTransitionTime":"2026-01-24T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.387030 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.400301 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.414917 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.429349 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.444981 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.465465 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.477135 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.480708 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.480752 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.480761 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.480778 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.480788 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:30Z","lastTransitionTime":"2026-01-24T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.488356 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.500851 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.513347 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.530623 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.546269 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:41:30 crc kubenswrapper[4705]: E0124 07:41:30.546443 4705 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:41:38.546411844 +0000 UTC m=+37.266285142 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.546517 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.546565 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:30 crc kubenswrapper[4705]: E0124 07:41:30.546681 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 07:41:30 crc kubenswrapper[4705]: E0124 07:41:30.546722 4705 secret.go:188] Couldn't get 
secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 07:41:30 crc kubenswrapper[4705]: E0124 07:41:30.546740 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:38.546727513 +0000 UTC m=+37.266600801 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 07:41:30 crc kubenswrapper[4705]: E0124 07:41:30.546759 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:38.546749464 +0000 UTC m=+37.266622752 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.546974 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696a
e0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.560887 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.574850 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.574947 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:30 crc kubenswrapper[4705]: E0124 07:41:30.575061 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.575406 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:30 crc kubenswrapper[4705]: E0124 07:41:30.575457 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:30 crc kubenswrapper[4705]: E0124 07:41:30.575501 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.576359 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.582643 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.582667 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.582677 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.582690 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.582700 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:30Z","lastTransitionTime":"2026-01-24T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.588670 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.607182 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.617942 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.635884 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.649676 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.659784 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.675272 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.680329 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 22:51:32.045621022 +0000 UTC Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.685178 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.685215 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.685225 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.685240 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.685258 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:30Z","lastTransitionTime":"2026-01-24T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.748288 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.748352 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:30 crc kubenswrapper[4705]: E0124 07:41:30.748461 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 07:41:30 crc kubenswrapper[4705]: E0124 07:41:30.748476 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 07:41:30 crc kubenswrapper[4705]: E0124 07:41:30.748487 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:30 crc kubenswrapper[4705]: E0124 07:41:30.748511 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: 
object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 07:41:30 crc kubenswrapper[4705]: E0124 07:41:30.748537 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:38.748523519 +0000 UTC m=+37.468396807 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:30 crc kubenswrapper[4705]: E0124 07:41:30.748547 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 07:41:30 crc kubenswrapper[4705]: E0124 07:41:30.748562 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:30 crc kubenswrapper[4705]: E0124 07:41:30.748615 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:38.748598021 +0000 UTC m=+37.468471359 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.787437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.787482 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.787493 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.787507 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.787517 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:30Z","lastTransitionTime":"2026-01-24T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.889721 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.889764 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.889773 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.889792 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.889805 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:30Z","lastTransitionTime":"2026-01-24T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.992191 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.992243 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.992256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.992274 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:30 crc kubenswrapper[4705]: I0124 07:41:30.992285 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:30Z","lastTransitionTime":"2026-01-24T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.095050 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.095092 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.095106 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.095124 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.095132 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:31Z","lastTransitionTime":"2026-01-24T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.197222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.197261 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.197270 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.197284 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.197295 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:31Z","lastTransitionTime":"2026-01-24T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.256975 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerStarted","Data":"24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2"} Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.258401 4705 generic.go:334] "Generic (PLEG): container finished" podID="2164d9f8-41b8-4380-b070-76e772189f1a" containerID="07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d" exitCode=0 Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.258442 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" event={"ID":"2164d9f8-41b8-4380-b070-76e772189f1a","Type":"ContainerDied","Data":"07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d"} Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.274226 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.283163 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.299981 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.300012 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.300022 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.300038 4705 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.300049 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:31Z","lastTransitionTime":"2026-01-24T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.301980 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.315303 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.328346 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.340173 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.349452 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.360640 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.371444 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.384605 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.402238 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.402270 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.402279 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.402292 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.402301 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:31Z","lastTransitionTime":"2026-01-24T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.406002 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.421151 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.435760 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.450615 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.464842 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.467968 4705 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.552812 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.552871 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.552883 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.552901 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.552911 
4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:31Z","lastTransitionTime":"2026-01-24T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.614972 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.652755 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.660346 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.660406 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.660419 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.660438 4705 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.660455 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:31Z","lastTransitionTime":"2026-01-24T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.673869 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.680436 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:50:29.287496161 +0000 UTC Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.685883 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.697198 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.706612 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.719510 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.730899 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.741857 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.755378 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.763091 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.763119 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.763128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.763141 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.763150 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:31Z","lastTransitionTime":"2026-01-24T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.773562 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.786107 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.798867 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.811647 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.823490 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.865363 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.865415 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.865426 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.865442 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.865455 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:31Z","lastTransitionTime":"2026-01-24T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.967900 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.967952 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.967990 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.968008 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:31 crc kubenswrapper[4705]: I0124 07:41:31.968019 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:31Z","lastTransitionTime":"2026-01-24T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.070703 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.070737 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.070745 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.070760 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.070768 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:32Z","lastTransitionTime":"2026-01-24T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.173096 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.173143 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.173167 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.173189 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.173203 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:32Z","lastTransitionTime":"2026-01-24T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.271842 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363"} Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.274397 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.274437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.274448 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.274465 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.274476 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:32Z","lastTransitionTime":"2026-01-24T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.275066 4705 generic.go:334] "Generic (PLEG): container finished" podID="2164d9f8-41b8-4380-b070-76e772189f1a" containerID="2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e" exitCode=0 Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.275112 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" event={"ID":"2164d9f8-41b8-4380-b070-76e772189f1a","Type":"ContainerDied","Data":"2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e"} Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.286238 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.298789 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.316492 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.329887 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.344209 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.356711 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.366759 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.378520 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.378543 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.378552 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.378565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.378573 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:32Z","lastTransitionTime":"2026-01-24T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.379532 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26
c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.392246 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.406584 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 
07:41:32.425470 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.437318 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.450992 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.463333 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.481081 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.481118 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.481132 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.481149 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.481159 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:32Z","lastTransitionTime":"2026-01-24T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.521261 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.534221 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] 
Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.550248 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.566315 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.574815 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.574814 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:32 crc kubenswrapper[4705]: E0124 07:41:32.574951 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:32 crc kubenswrapper[4705]: E0124 07:41:32.575042 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.574849 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:32 crc kubenswrapper[4705]: E0124 07:41:32.575110 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.577629 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.583985 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.584015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.584027 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 
07:41:32.584043 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.584055 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:32Z","lastTransitionTime":"2026-01-24T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.592741 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.604478 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bde
af3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.616659 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.628409 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.638881 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.652233 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.672786 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.681166 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:30:56.491410866 +0000 UTC Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.683476 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.685984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.686003 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.686011 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.686024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.686032 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:32Z","lastTransitionTime":"2026-01-24T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.696967 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.710162 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.723417 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:32Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.788188 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.788232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.788243 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.788261 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.788273 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:32Z","lastTransitionTime":"2026-01-24T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.890581 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.890614 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.890622 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.890633 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.890642 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:32Z","lastTransitionTime":"2026-01-24T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.998729 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.998773 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.998812 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.998857 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:32 crc kubenswrapper[4705]: I0124 07:41:32.998869 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:32Z","lastTransitionTime":"2026-01-24T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.007691 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.034367 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc2
76e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting 
controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"
containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.049237 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.072102 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.084532 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.098986 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.100423 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:33 crc 
kubenswrapper[4705]: I0124 07:41:33.100450 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.100460 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.100474 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.100484 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:33Z","lastTransitionTime":"2026-01-24T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.112420 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18f140b73d9e089cff43755e810217
f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.129558 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.142776 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.155356 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.169303 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.197695 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.294024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.294058 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.294069 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.294086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.294096 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:33Z","lastTransitionTime":"2026-01-24T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.296973 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" event={"ID":"2164d9f8-41b8-4380-b070-76e772189f1a","Type":"ContainerStarted","Data":"be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539"} Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.319846 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.334397 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.346499 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.358791 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.373936 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.389062 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.396647 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.396684 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.396698 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.396724 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.396741 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:33Z","lastTransitionTime":"2026-01-24T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.405191 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxm
q2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.430801 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.443710 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.457764 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.479220 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.503087 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.507883 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.508109 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.508387 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.508775 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.508947 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:33Z","lastTransitionTime":"2026-01-24T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.517362 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0
14d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.527713 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.549728 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.611275 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.611755 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.611835 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.611909 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.611982 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:33Z","lastTransitionTime":"2026-01-24T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.629388 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.642018 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.651962 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.660282 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:33Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.681433 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 14:29:12.826332673 +0000 UTC Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.714445 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.714652 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.714746 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.714885 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.714982 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:33Z","lastTransitionTime":"2026-01-24T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.817591 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.817635 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.817646 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.817664 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.817677 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:33Z","lastTransitionTime":"2026-01-24T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.919840 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.919866 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.919874 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.919905 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:33 crc kubenswrapper[4705]: I0124 07:41:33.919914 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:33Z","lastTransitionTime":"2026-01-24T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.022384 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.022479 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.022532 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.022562 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.022576 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:34Z","lastTransitionTime":"2026-01-24T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.124596 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.124628 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.124643 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.124659 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.124668 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:34Z","lastTransitionTime":"2026-01-24T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.226992 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.227036 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.227046 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.227064 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.227076 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:34Z","lastTransitionTime":"2026-01-24T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.304159 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerStarted","Data":"29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4"} Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.304446 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.319641 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.330683 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.345860 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.345888 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.345896 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.345909 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.345919 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:34Z","lastTransitionTime":"2026-01-24T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.348182 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxm
q2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.365266 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.380602 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.396497 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.414139 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.431139 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.435343 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.448447 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.448506 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.448521 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.448536 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 
07:41:34.448546 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:34Z","lastTransitionTime":"2026-01-24T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.449872 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.460375 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.480448 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.493052 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.505051 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.518660 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.535089 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.551077 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.551111 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.551121 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.551136 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.551146 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:34Z","lastTransitionTime":"2026-01-24T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.555842 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.568959 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.574927 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.574967 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.575012 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:34 crc kubenswrapper[4705]: E0124 07:41:34.575042 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:34 crc kubenswrapper[4705]: E0124 07:41:34.575096 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:34 crc kubenswrapper[4705]: E0124 07:41:34.575207 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.583949 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.595945 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.607142 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.618439 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.627505 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.643949 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.653046 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.653112 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.653121 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.653135 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.653147 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:34Z","lastTransitionTime":"2026-01-24T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.657265 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.668159 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.677817 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.681877 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:39:55.49569082 +0000 UTC Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.686687 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.698693 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.708499 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.725769 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxm
q2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:34Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.755507 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.755548 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.755562 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.755593 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.755603 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:34Z","lastTransitionTime":"2026-01-24T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.858465 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.858528 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.858542 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.858564 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.858577 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:34Z","lastTransitionTime":"2026-01-24T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.960414 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.960455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.960467 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.960483 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:34 crc kubenswrapper[4705]: I0124 07:41:34.960494 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:34Z","lastTransitionTime":"2026-01-24T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.062864 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.062932 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.062943 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.062960 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.062972 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:35Z","lastTransitionTime":"2026-01-24T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.170141 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.170458 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.170576 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.170641 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.170749 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:35Z","lastTransitionTime":"2026-01-24T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.273836 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.273875 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.273884 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.273898 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.273908 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:35Z","lastTransitionTime":"2026-01-24T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.311104 4705 generic.go:334] "Generic (PLEG): container finished" podID="2164d9f8-41b8-4380-b070-76e772189f1a" containerID="be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539" exitCode=0 Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.311193 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" event={"ID":"2164d9f8-41b8-4380-b070-76e772189f1a","Type":"ContainerDied","Data":"be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539"} Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.311276 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.311657 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.325367 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.339024 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.340942 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 
07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.351078 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.376509 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.376545 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.376554 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.376567 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.376578 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:35Z","lastTransitionTime":"2026-01-24T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.415736 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.430808 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.442131 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.456913 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.476083 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{
\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.480968 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.481011 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:35 crc 
kubenswrapper[4705]: I0124 07:41:35.481023 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.481040 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.481051 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:35Z","lastTransitionTime":"2026-01-24T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.490687 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.507650 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.523971 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.539785 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.560665 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.572361 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.647586 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.648083 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.648096 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.648114 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.648124 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:35Z","lastTransitionTime":"2026-01-24T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.669741 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.682394 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:37:43.402202576 +0000 UTC Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.686389 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.700982 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.714185 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.727029 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.742767 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.750122 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.750178 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.750191 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.750207 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.750217 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:35Z","lastTransitionTime":"2026-01-24T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.756497 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.770947 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.792510 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{
\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.810490 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.822946 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.834352 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.845650 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.853004 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.853047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.853060 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.853077 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.853089 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:35Z","lastTransitionTime":"2026-01-24T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.862461 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0
14d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.873036 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.890499 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:35Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.955652 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.955682 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.955692 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.955704 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:35 crc kubenswrapper[4705]: I0124 07:41:35.955712 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:35Z","lastTransitionTime":"2026-01-24T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.057602 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.057633 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.057642 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.057656 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.057665 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:36Z","lastTransitionTime":"2026-01-24T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.160634 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.160704 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.160742 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.160756 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.160765 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:36Z","lastTransitionTime":"2026-01-24T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.262679 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.262727 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.262736 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.262753 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.262762 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:36Z","lastTransitionTime":"2026-01-24T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.316961 4705 generic.go:334] "Generic (PLEG): container finished" podID="2164d9f8-41b8-4380-b070-76e772189f1a" containerID="21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805" exitCode=0 Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.317033 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" event={"ID":"2164d9f8-41b8-4380-b070-76e772189f1a","Type":"ContainerDied","Data":"21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805"} Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.317109 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.329800 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:36Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.342428 4705 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:36Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.358297 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:36Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.365611 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.365653 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.365667 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.365685 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.365698 4705 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:36Z","lastTransitionTime":"2026-01-24T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.376082 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:36Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.389700 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:36Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.409297 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:36Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.422120 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:36Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.444598 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:36Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.454773 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:36Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.467320 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.467378 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.467390 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.467409 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:36 crc 
kubenswrapper[4705]: I0124 07:41:36.467429 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:36Z","lastTransitionTime":"2026-01-24T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.471288 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:36Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.484526 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:36Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.496986 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:36Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.509534 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:36Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.520269 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:36Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.529506 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:36Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.569936 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.569977 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.569990 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.570012 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.570023 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:36Z","lastTransitionTime":"2026-01-24T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.575199 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.575270 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.575270 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:36 crc kubenswrapper[4705]: E0124 07:41:36.575358 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:36 crc kubenswrapper[4705]: E0124 07:41:36.575440 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:36 crc kubenswrapper[4705]: E0124 07:41:36.575524 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.679432 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.679509 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.679524 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.679546 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.679564 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:36Z","lastTransitionTime":"2026-01-24T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.682585 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:24:35.146392353 +0000 UTC Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.783088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.783173 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.783188 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.783204 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.783217 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:36Z","lastTransitionTime":"2026-01-24T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.886949 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.886993 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.887015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.887041 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.887060 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:36Z","lastTransitionTime":"2026-01-24T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.989779 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.989838 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.989852 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.989868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:36 crc kubenswrapper[4705]: I0124 07:41:36.989879 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:36Z","lastTransitionTime":"2026-01-24T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.092934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.093004 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.093019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.093051 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.093074 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:37Z","lastTransitionTime":"2026-01-24T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.270484 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.270527 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.270544 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.270569 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.270585 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:37Z","lastTransitionTime":"2026-01-24T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.328362 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.375216 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.375278 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.375302 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.375326 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.375346 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:37Z","lastTransitionTime":"2026-01-24T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.479084 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.479132 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.479142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.479162 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.479172 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:37Z","lastTransitionTime":"2026-01-24T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.580604 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.580652 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.580663 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.580678 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.580688 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:37Z","lastTransitionTime":"2026-01-24T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.682891 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 14:00:16.562861 +0000 UTC Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.682947 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.682978 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.682991 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.683010 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.683022 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:37Z","lastTransitionTime":"2026-01-24T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.785458 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.785518 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.785540 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.785560 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.785574 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:37Z","lastTransitionTime":"2026-01-24T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.887693 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.887773 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.887806 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.887888 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.887913 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:37Z","lastTransitionTime":"2026-01-24T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.923743 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7"] Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.924195 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.926562 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.926562 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.941460 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:37Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.954064 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:37Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.968094 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:37Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.976093 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsmpp\" (UniqueName: \"kubernetes.io/projected/a2d9ba7a-8541-4f85-b5e1-ae79e882761c-kube-api-access-lsmpp\") pod \"ovnkube-control-plane-749d76644c-qsww7\" (UID: \"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.976310 
4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a2d9ba7a-8541-4f85-b5e1-ae79e882761c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qsww7\" (UID: \"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.976437 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a2d9ba7a-8541-4f85-b5e1-ae79e882761c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qsww7\" (UID: \"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.976590 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a2d9ba7a-8541-4f85-b5e1-ae79e882761c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qsww7\" (UID: \"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.977913 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:37Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.990191 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.990254 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.990275 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.990303 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.990324 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:37Z","lastTransitionTime":"2026-01-24T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:37 crc kubenswrapper[4705]: I0124 07:41:37.992052 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26
c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:37Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.004922 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.018202 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.028750 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.040951 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.052789 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.069230 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.077225 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a2d9ba7a-8541-4f85-b5e1-ae79e882761c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qsww7\" (UID: \"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.077299 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsmpp\" (UniqueName: \"kubernetes.io/projected/a2d9ba7a-8541-4f85-b5e1-ae79e882761c-kube-api-access-lsmpp\") pod \"ovnkube-control-plane-749d76644c-qsww7\" (UID: \"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.077346 
4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a2d9ba7a-8541-4f85-b5e1-ae79e882761c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qsww7\" (UID: \"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.077372 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a2d9ba7a-8541-4f85-b5e1-ae79e882761c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qsww7\" (UID: \"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.078055 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a2d9ba7a-8541-4f85-b5e1-ae79e882761c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qsww7\" (UID: \"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.078490 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a2d9ba7a-8541-4f85-b5e1-ae79e882761c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qsww7\" (UID: \"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.086453 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a2d9ba7a-8541-4f85-b5e1-ae79e882761c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qsww7\" (UID: 
\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.093870 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.094077 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.094202 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.094323 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.094465 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:38Z","lastTransitionTime":"2026-01-24T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.097704 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.100989 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsmpp\" (UniqueName: \"kubernetes.io/projected/a2d9ba7a-8541-4f85-b5e1-ae79e882761c-kube-api-access-lsmpp\") pod \"ovnkube-control-plane-749d76644c-qsww7\" (UID: \"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.113805 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.130753 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.140339 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.151102 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 
configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.167614 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.167655 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.167665 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.167680 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.167690 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:38Z","lastTransitionTime":"2026-01-24T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.179678 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.182685 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.182720 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.182730 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.182745 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.182756 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:38Z","lastTransitionTime":"2026-01-24T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.194426 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.197220 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.197249 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.197260 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.197275 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.197286 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:38Z","lastTransitionTime":"2026-01-24T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.208339 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.211514 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.211553 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.211563 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.211578 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.211589 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:38Z","lastTransitionTime":"2026-01-24T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.224205 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.228766 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.228798 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.228808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.228836 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.228847 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:38Z","lastTransitionTime":"2026-01-24T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.239408 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.246559 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.246713 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.248297 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.248337 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.248350 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.248367 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.248381 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:38Z","lastTransitionTime":"2026-01-24T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:38 crc kubenswrapper[4705]: W0124 07:41:38.252769 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2d9ba7a_8541_4f85_b5e1_ae79e882761c.slice/crio-ddbe723db9d9a8ae577f49d8cd23ba3452291e1026fa2ff85b742ccbcfc87c48 WatchSource:0}: Error finding container ddbe723db9d9a8ae577f49d8cd23ba3452291e1026fa2ff85b742ccbcfc87c48: Status 404 returned error can't find the container with id ddbe723db9d9a8ae577f49d8cd23ba3452291e1026fa2ff85b742ccbcfc87c48 Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.337071 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" event={"ID":"2164d9f8-41b8-4380-b070-76e772189f1a","Type":"ContainerStarted","Data":"ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f"} Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.338390 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" event={"ID":"a2d9ba7a-8541-4f85-b5e1-ae79e882761c","Type":"ContainerStarted","Data":"ddbe723db9d9a8ae577f49d8cd23ba3452291e1026fa2ff85b742ccbcfc87c48"} Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.349054 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.350567 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.350639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.350654 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.350672 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.350712 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:38Z","lastTransitionTime":"2026-01-24T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.367876 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.381181 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.393997 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.405428 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.418506 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.429374 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.452265 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.452398 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.452430 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.452437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.452454 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.452462 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:38Z","lastTransitionTime":"2026-01-24T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.482169 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.491388 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.501581 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.513208 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.526524 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.540040 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.549757 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.554138 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.554351 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.554415 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.554487 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.554564 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:38Z","lastTransitionTime":"2026-01-24T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.562516 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:38Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.574743 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.574758 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.574804 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.574843 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.574917 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.574971 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.583174 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.583358 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:41:54.583332517 +0000 UTC m=+53.303205815 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.583452 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.583490 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.583635 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.583665 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:54.583658278 +0000 UTC m=+53.303531566 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.583699 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.583716 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:54.583711289 +0000 UTC m=+53.303584577 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.657249 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.657515 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.657612 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.657747 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:38 crc 
kubenswrapper[4705]: I0124 07:41:38.658033 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:38Z","lastTransitionTime":"2026-01-24T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.683389 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 00:17:13.433754446 +0000 UTC Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.760259 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.760301 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.760316 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.760338 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.760351 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:38Z","lastTransitionTime":"2026-01-24T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.785026 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.785388 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.785265 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.785681 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.785814 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.786025 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:54.78600297 +0000 UTC m=+53.505876278 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.785537 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.786267 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.786371 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:38 crc kubenswrapper[4705]: E0124 07:41:38.786501 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:54.786488345 +0000 UTC m=+53.506361643 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.863868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.863907 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.863915 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.863929 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.863937 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:38Z","lastTransitionTime":"2026-01-24T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.967185 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.967226 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.967239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.967254 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:38 crc kubenswrapper[4705]: I0124 07:41:38.967262 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:38Z","lastTransitionTime":"2026-01-24T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.121691 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.121723 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.121731 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.121745 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.121754 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:39Z","lastTransitionTime":"2026-01-24T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.229570 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.229607 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.229619 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.229636 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.229647 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:39Z","lastTransitionTime":"2026-01-24T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.339668 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.339699 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.339707 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.339722 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.339730 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:39Z","lastTransitionTime":"2026-01-24T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.342240 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" event={"ID":"a2d9ba7a-8541-4f85-b5e1-ae79e882761c","Type":"ContainerStarted","Data":"fa710434f940f887ec6ef15c0536255137218da6d7682ad5a22ec8953dabb19e"} Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.342277 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" event={"ID":"a2d9ba7a-8541-4f85-b5e1-ae79e882761c","Type":"ContainerStarted","Data":"e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a"} Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.348024 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-mxnng"] Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.348490 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:39 crc kubenswrapper[4705]: E0124 07:41:39.348555 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.352054 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.368502 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.379406 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.391506 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\
",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.405118 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.418196 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.428087 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.444566 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.445574 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs\") pod \"network-metrics-daemon-mxnng\" (UID: \"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\") " pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.446005 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.446047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.446057 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.446073 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.446084 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:39Z","lastTransitionTime":"2026-01-24T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.446477 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft2qm\" (UniqueName: \"kubernetes.io/projected/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-kube-api-access-ft2qm\") pod \"network-metrics-daemon-mxnng\" (UID: \"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\") " pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.463259 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372
b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.475396 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.492517 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.507374 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.529260 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.539200 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.547886 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs\") pod \"network-metrics-daemon-mxnng\" (UID: \"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\") " pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.547974 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft2qm\" (UniqueName: \"kubernetes.io/projected/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-kube-api-access-ft2qm\") pod \"network-metrics-daemon-mxnng\" (UID: \"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\") " pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:39 crc kubenswrapper[4705]: E0124 07:41:39.548314 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 07:41:39 crc kubenswrapper[4705]: E0124 
07:41:39.548366 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs podName:aaa7a0f6-16ad-42c1-b1e2-6c080807fda1 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:40.048348288 +0000 UTC m=+38.768221576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs") pod "network-metrics-daemon-mxnng" (UID: "aaa7a0f6-16ad-42c1-b1e2-6c080807fda1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.549297 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.567472 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft2qm\" (UniqueName: \"kubernetes.io/projected/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-kube-api-access-ft2qm\") pod \"network-metrics-daemon-mxnng\" (UID: \"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\") " pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.570847 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},
{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.586335 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.600786 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.650903 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.650944 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.650954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.650970 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.650988 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:39Z","lastTransitionTime":"2026-01-24T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.665635 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.680837 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc 
kubenswrapper[4705]: I0124 07:41:39.684108 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:55:29.544692956 +0000 UTC Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.757638 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5
fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.759092 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.759129 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.759139 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.759152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.759163 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:39Z","lastTransitionTime":"2026-01-24T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.771105 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.782059 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.937659 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.937691 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.937702 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.937718 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.937731 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:39Z","lastTransitionTime":"2026-01-24T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.938352 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.950330 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.963672 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.974929 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:39 crc kubenswrapper[4705]: I0124 07:41:39.995474 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:39Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.006630 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:40Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.017605 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:40Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.029452 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:40Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.038847 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:40Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.039811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.039873 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.039883 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.039902 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.039913 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:40Z","lastTransitionTime":"2026-01-24T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.048019 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:40Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.094855 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs\") pod \"network-metrics-daemon-mxnng\" (UID: \"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\") " pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:40 crc kubenswrapper[4705]: E0124 07:41:40.095271 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 
24 07:41:40 crc kubenswrapper[4705]: E0124 07:41:40.095491 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs podName:aaa7a0f6-16ad-42c1-b1e2-6c080807fda1 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:41.09546474 +0000 UTC m=+39.815338028 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs") pod "network-metrics-daemon-mxnng" (UID: "aaa7a0f6-16ad-42c1-b1e2-6c080807fda1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.141716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.141999 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.142094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.142226 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.142309 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:40Z","lastTransitionTime":"2026-01-24T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.245289 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.245326 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.245338 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.245352 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.245361 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:40Z","lastTransitionTime":"2026-01-24T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.348112 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.348150 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.348159 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.348174 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.348185 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:40Z","lastTransitionTime":"2026-01-24T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.450763 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.450802 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.450811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.450844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.450854 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:40Z","lastTransitionTime":"2026-01-24T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.574753 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.574805 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.575007 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:40 crc kubenswrapper[4705]: E0124 07:41:40.575020 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.575268 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:40 crc kubenswrapper[4705]: E0124 07:41:40.575456 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:41:40 crc kubenswrapper[4705]: E0124 07:41:40.575565 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:40 crc kubenswrapper[4705]: E0124 07:41:40.575705 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.685095 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 19:35:58.795996344 +0000 UTC Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.723071 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.723141 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.723154 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.723181 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.723198 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:40Z","lastTransitionTime":"2026-01-24T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.825599 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.825640 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.825650 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.825664 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.825673 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:40Z","lastTransitionTime":"2026-01-24T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.928621 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.928656 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.928665 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.928682 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:40 crc kubenswrapper[4705]: I0124 07:41:40.928692 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:40Z","lastTransitionTime":"2026-01-24T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.031466 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.031503 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.031512 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.031525 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.031533 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:41Z","lastTransitionTime":"2026-01-24T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.108330 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs\") pod \"network-metrics-daemon-mxnng\" (UID: \"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\") " pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:41 crc kubenswrapper[4705]: E0124 07:41:41.108476 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 07:41:41 crc kubenswrapper[4705]: E0124 07:41:41.108520 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs podName:aaa7a0f6-16ad-42c1-b1e2-6c080807fda1 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:43.108507444 +0000 UTC m=+41.828380732 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs") pod "network-metrics-daemon-mxnng" (UID: "aaa7a0f6-16ad-42c1-b1e2-6c080807fda1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.133950 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.133989 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.134001 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.134019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.134030 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:41Z","lastTransitionTime":"2026-01-24T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.236377 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.236406 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.236414 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.236426 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.236434 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:41Z","lastTransitionTime":"2026-01-24T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.338952 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.338984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.338993 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.339005 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.339014 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:41Z","lastTransitionTime":"2026-01-24T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.351027 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovnkube-controller/0.log" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.353531 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerID="29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4" exitCode=1 Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.353563 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerDied","Data":"29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4"} Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.354175 4705 scope.go:117] "RemoveContainer" containerID="29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.374586 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.386377 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.403448 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.438625 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.
168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b
54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.440937 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.440981 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.440995 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.441016 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.441027 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:41Z","lastTransitionTime":"2026-01-24T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.453893 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.467403 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.478730 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.492065 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.505250 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc 
kubenswrapper[4705]: I0124 07:41:41.519586 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093
143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.533201 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.543672 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.543701 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.543711 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.543726 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.543739 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:41Z","lastTransitionTime":"2026-01-24T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.555271 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:41Z\\\",\\\"message\\\":\\\"y.go:140\\\\nI0124 07:41:41.037269 5942 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 
07:41:41.037496 5942 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:41.037614 5942 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 07:41:41.037638 5942 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:41.037712 5942 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 07:41:41.037812 5942 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:41.038084 5942 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0124 07:41:41.038255 5942 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121
cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.569989 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218
da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.585030 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.605133 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.618397 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.631814 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.647086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.647127 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.647138 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.647154 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.647166 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:41Z","lastTransitionTime":"2026-01-24T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.647330 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d
34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.661235 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.681241 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:41Z\\\",\\\"message\\\":\\\"y.go:140\\\\nI0124 07:41:41.037269 5942 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 
07:41:41.037496 5942 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:41.037614 5942 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 07:41:41.037638 5942 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:41.037712 5942 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 07:41:41.037812 5942 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:41.038084 5942 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0124 07:41:41.038255 5942 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121
cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.685509 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:20:29.331213698 +0000 UTC Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.693880 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218
da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.708907 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.722411 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.732761 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.741609 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.752028 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.752064 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.752073 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.752086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.752097 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:41Z","lastTransitionTime":"2026-01-24T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.758976 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26
c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.774538 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.815637 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.838898 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc 
kubenswrapper[4705]: I0124 07:41:41.854532 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.854574 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.854583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.854597 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.854606 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:41Z","lastTransitionTime":"2026-01-24T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.863114 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.875809 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.888525 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.900170 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.911501 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.957850 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.958209 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.958228 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.958255 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:41 crc kubenswrapper[4705]: I0124 07:41:41.958276 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:41Z","lastTransitionTime":"2026-01-24T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.060195 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.060229 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.060239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.060253 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.060262 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:42Z","lastTransitionTime":"2026-01-24T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.162337 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.162400 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.162416 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.162438 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.162453 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:42Z","lastTransitionTime":"2026-01-24T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.264902 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.264943 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.264957 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.264972 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.264985 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:42Z","lastTransitionTime":"2026-01-24T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.358088 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovnkube-controller/0.log" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.360506 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerStarted","Data":"ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af"} Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.360623 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.366887 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.366922 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.366936 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.366953 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.366966 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:42Z","lastTransitionTime":"2026-01-24T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.374530 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26
c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.385732 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.398033 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.408749 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.420208 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.432363 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.444131 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.454397 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc 
kubenswrapper[4705]: I0124 07:41:42.469214 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.469240 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.469249 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.469261 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.469273 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:42Z","lastTransitionTime":"2026-01-24T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.474400 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.482665 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.499856 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:41Z\\\",\\\"message\\\":\\\"y.go:140\\\\nI0124 07:41:41.037269 5942 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 07:41:41.037496 5942 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:41.037614 5942 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 07:41:41.037638 5942 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:41.037712 5942 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 07:41:41.037812 5942 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:41.038084 5942 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0124 07:41:41.038255 5942 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.511356 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218
da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.529107 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.540243 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.553528 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.563181 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.572080 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.572139 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.572268 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.572282 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.572133 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\
\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.572290 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:42Z","lastTransitionTime":"2026-01-24T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.575313 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.575330 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:42 crc kubenswrapper[4705]: E0124 07:41:42.575420 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.575705 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:42 crc kubenswrapper[4705]: E0124 07:41:42.575782 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.575865 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:42 crc kubenswrapper[4705]: E0124 07:41:42.575930 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:42 crc kubenswrapper[4705]: E0124 07:41:42.576041 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.674541 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.674597 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.674609 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.674627 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.674641 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:42Z","lastTransitionTime":"2026-01-24T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.686208 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 20:13:04.691683483 +0000 UTC Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.777565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.777600 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.777609 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.777647 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.777659 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:42Z","lastTransitionTime":"2026-01-24T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.879911 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.879937 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.879945 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.879964 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.879974 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:42Z","lastTransitionTime":"2026-01-24T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.982399 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.982429 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.982439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.982455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:42 crc kubenswrapper[4705]: I0124 07:41:42.982465 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:42Z","lastTransitionTime":"2026-01-24T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.085067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.085115 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.085130 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.085151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.085166 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:43Z","lastTransitionTime":"2026-01-24T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.133199 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs\") pod \"network-metrics-daemon-mxnng\" (UID: \"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\") " pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:43 crc kubenswrapper[4705]: E0124 07:41:43.133322 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 07:41:43 crc kubenswrapper[4705]: E0124 07:41:43.133387 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs podName:aaa7a0f6-16ad-42c1-b1e2-6c080807fda1 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:47.133371771 +0000 UTC m=+45.853245069 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs") pod "network-metrics-daemon-mxnng" (UID: "aaa7a0f6-16ad-42c1-b1e2-6c080807fda1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.187798 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.187875 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.187891 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.187908 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.187922 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:43Z","lastTransitionTime":"2026-01-24T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.290638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.290711 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.290731 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.290759 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.290783 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:43Z","lastTransitionTime":"2026-01-24T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.366208 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovnkube-controller/1.log" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.367184 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovnkube-controller/0.log" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.372072 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerID="ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af" exitCode=1 Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.372121 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerDied","Data":"ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af"} Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.372192 4705 scope.go:117] "RemoveContainer" containerID="29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.374112 4705 scope.go:117] "RemoveContainer" containerID="ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af" Jan 24 07:41:43 crc kubenswrapper[4705]: E0124 07:41:43.374469 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.394607 4705 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.394665 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.394683 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.394706 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.394720 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:43Z","lastTransitionTime":"2026-01-24T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.400062 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26
c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.422171 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.440981 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.457523 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.471801 4705 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc 
kubenswrapper[4705]: I0124 07:41:43.497329 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.497370 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.497382 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.497402 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.497418 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:43Z","lastTransitionTime":"2026-01-24T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.499045 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.511113 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.525524 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.538674 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.553514 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.564957 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.585990 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d8e84926c026afe3cd30b32b8342ecc3a47d37b1cd2fdc0f0c4dfeb5473eb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:41Z\\\",\\\"message\\\":\\\"y.go:140\\\\nI0124 07:41:41.037269 5942 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 07:41:41.037496 5942 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:41.037614 5942 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 07:41:41.037638 5942 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:41.037712 5942 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 07:41:41.037812 5942 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:41.038084 5942 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0124 07:41:41.038255 5942 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:42Z\\\",\\\"message\\\":\\\"vice openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, 
Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.109\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0124 07:41:42.244705 6156 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nF0124 07:41:42.244594 6156 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default no\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\
\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc 
kubenswrapper[4705]: I0124 07:41:43.597442 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.599227 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.599751 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.599781 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.599797 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.599807 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:43Z","lastTransitionTime":"2026-01-24T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.606616 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.616786 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.628215 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.637987 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:43Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.686598 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:32:06.216754303 +0000 UTC Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.702243 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.702292 4705 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.702302 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.702317 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.702329 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:43Z","lastTransitionTime":"2026-01-24T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.844135 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.844461 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.844525 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.844592 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.844655 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:43Z","lastTransitionTime":"2026-01-24T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.990729 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.990755 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.990764 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.990777 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:43 crc kubenswrapper[4705]: I0124 07:41:43.990786 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:43Z","lastTransitionTime":"2026-01-24T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.105055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.105089 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.105098 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.105112 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.105121 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:44Z","lastTransitionTime":"2026-01-24T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.207836 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.207879 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.207891 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.207906 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.207915 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:44Z","lastTransitionTime":"2026-01-24T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.310220 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.310270 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.310283 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.310302 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.310314 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:44Z","lastTransitionTime":"2026-01-24T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.376724 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovnkube-controller/1.log" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.413306 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.413353 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.413363 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.413383 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.413400 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:44Z","lastTransitionTime":"2026-01-24T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.543711 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.544610 4705 scope.go:117] "RemoveContainer" containerID="ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af" Jan 24 07:41:44 crc kubenswrapper[4705]: E0124 07:41:44.544767 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.545225 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.545247 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.545256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.545269 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.545279 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:44Z","lastTransitionTime":"2026-01-24T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.832561 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:44 crc kubenswrapper[4705]: E0124 07:41:44.832666 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.832722 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:44 crc kubenswrapper[4705]: E0124 07:41:44.832766 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.832816 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:44 crc kubenswrapper[4705]: E0124 07:41:44.832886 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.832930 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:44 crc kubenswrapper[4705]: E0124 07:41:44.832968 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.833285 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 00:07:10.278517437 +0000 UTC Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.834375 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.834400 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.834409 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.834420 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.834429 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:44Z","lastTransitionTime":"2026-01-24T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.843205 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0
14d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:44Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.864217 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:44Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.892570 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:42Z\\\",\\\"message\\\":\\\"vice openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.109\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0124 07:41:42.244705 6156 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nF0124 07:41:42.244594 6156 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default no\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc
77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:44Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.903996 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218
da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:44Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.976694 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.976732 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.976740 4705 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.976754 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.976763 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:44Z","lastTransitionTime":"2026-01-24T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.980667 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:44Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:44 crc kubenswrapper[4705]: I0124 07:41:44.993369 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:44Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.003008 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:45Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.011372 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:45Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.028138 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:45Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.041070 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:45Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.055641 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:45Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.073059 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.
168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b
54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:45Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.078981 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.079031 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.079041 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.079057 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.079067 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:45Z","lastTransitionTime":"2026-01-24T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.084670 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:45Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.096213 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:45Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.108109 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:45Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.121430 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:45Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.131831 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:45Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:45 crc 
kubenswrapper[4705]: I0124 07:41:45.181680 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.181725 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.181735 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.181752 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.181763 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:45Z","lastTransitionTime":"2026-01-24T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.284387 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.284417 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.284427 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.284441 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.284451 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:45Z","lastTransitionTime":"2026-01-24T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.386573 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.386609 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.386627 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.386643 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.386655 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:45Z","lastTransitionTime":"2026-01-24T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.489043 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.489086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.489097 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.489113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.489124 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:45Z","lastTransitionTime":"2026-01-24T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.591422 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.591508 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.591524 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.591542 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.591579 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:45Z","lastTransitionTime":"2026-01-24T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.693508 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.693542 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.693553 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.693568 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.693580 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:45Z","lastTransitionTime":"2026-01-24T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.796579 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.797064 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.797158 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.797261 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.797342 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:45Z","lastTransitionTime":"2026-01-24T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.833854 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:37:02.30957643 +0000 UTC Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.899713 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.899999 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.900085 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.900176 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:45 crc kubenswrapper[4705]: I0124 07:41:45.900254 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:45Z","lastTransitionTime":"2026-01-24T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.002931 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.002975 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.002990 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.003007 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.003018 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:46Z","lastTransitionTime":"2026-01-24T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.105218 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.105260 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.105271 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.105287 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.105296 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:46Z","lastTransitionTime":"2026-01-24T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.208140 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.208178 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.208190 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.208206 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.208217 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:46Z","lastTransitionTime":"2026-01-24T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.310284 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.310536 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.310599 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.310671 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.310752 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:46Z","lastTransitionTime":"2026-01-24T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.413517 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.413734 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.413806 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.413923 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.414006 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:46Z","lastTransitionTime":"2026-01-24T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.516810 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.517205 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.517315 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.517407 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.517505 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:46Z","lastTransitionTime":"2026-01-24T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.575400 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:46 crc kubenswrapper[4705]: E0124 07:41:46.575521 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.575764 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.575867 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.575852 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:46 crc kubenswrapper[4705]: E0124 07:41:46.575957 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:46 crc kubenswrapper[4705]: E0124 07:41:46.576005 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:41:46 crc kubenswrapper[4705]: E0124 07:41:46.576039 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.620585 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.620849 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.620934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.621015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.621112 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:46Z","lastTransitionTime":"2026-01-24T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.723549 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.723622 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.723638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.723662 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.723681 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:46Z","lastTransitionTime":"2026-01-24T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.826026 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.826066 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.826075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.826090 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.826099 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:46Z","lastTransitionTime":"2026-01-24T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.834195 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:39:41.93241652 +0000 UTC Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.928882 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.928909 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.928917 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.928929 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:46 crc kubenswrapper[4705]: I0124 07:41:46.928938 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:46Z","lastTransitionTime":"2026-01-24T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.031426 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.031488 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.031511 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.031583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.031607 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:47Z","lastTransitionTime":"2026-01-24T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.134877 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.134940 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.134962 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.135011 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.135035 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:47Z","lastTransitionTime":"2026-01-24T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.155043 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs\") pod \"network-metrics-daemon-mxnng\" (UID: \"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\") " pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:47 crc kubenswrapper[4705]: E0124 07:41:47.155200 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 07:41:47 crc kubenswrapper[4705]: E0124 07:41:47.155260 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs podName:aaa7a0f6-16ad-42c1-b1e2-6c080807fda1 nodeName:}" failed. No retries permitted until 2026-01-24 07:41:55.155244677 +0000 UTC m=+53.875117985 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs") pod "network-metrics-daemon-mxnng" (UID: "aaa7a0f6-16ad-42c1-b1e2-6c080807fda1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.238765 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.239199 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.239349 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.239506 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.239639 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:47Z","lastTransitionTime":"2026-01-24T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.342514 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.342783 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.342889 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.342974 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.343055 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:47Z","lastTransitionTime":"2026-01-24T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.445144 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.445189 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.445200 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.445213 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.445221 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:47Z","lastTransitionTime":"2026-01-24T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.548140 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.548189 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.548199 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.548212 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.548220 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:47Z","lastTransitionTime":"2026-01-24T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.649876 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.649976 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.650015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.650045 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.650069 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:47Z","lastTransitionTime":"2026-01-24T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.753498 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.753547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.753565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.753583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.753595 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:47Z","lastTransitionTime":"2026-01-24T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.834367 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 22:54:39.148809252 +0000 UTC Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.856650 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.856701 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.856717 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.856739 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.856754 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:47Z","lastTransitionTime":"2026-01-24T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.959360 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.959398 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.959407 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.959421 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:47 crc kubenswrapper[4705]: I0124 07:41:47.959431 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:47Z","lastTransitionTime":"2026-01-24T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.061484 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.061547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.061565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.061590 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.061620 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:48Z","lastTransitionTime":"2026-01-24T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.164501 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.164540 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.164550 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.164565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.164574 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:48Z","lastTransitionTime":"2026-01-24T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.266086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.266144 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.266161 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.266185 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.266204 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:48Z","lastTransitionTime":"2026-01-24T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.369305 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.369352 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.369365 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.369382 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.369392 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:48Z","lastTransitionTime":"2026-01-24T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.471814 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.471883 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.471893 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.471906 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.471916 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:48Z","lastTransitionTime":"2026-01-24T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.621610 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:48 crc kubenswrapper[4705]: E0124 07:41:48.621712 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.621868 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.621930 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:48 crc kubenswrapper[4705]: E0124 07:41:48.621997 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.622068 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:48 crc kubenswrapper[4705]: E0124 07:41:48.622249 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:48 crc kubenswrapper[4705]: E0124 07:41:48.622371 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.623097 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.623118 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.623126 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.623140 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.623149 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:48Z","lastTransitionTime":"2026-01-24T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.624041 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.624069 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.624079 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.624092 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.624123 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:48Z","lastTransitionTime":"2026-01-24T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:48 crc kubenswrapper[4705]: E0124 07:41:48.638383 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:48Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.641719 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.641750 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.641760 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.641775 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.641786 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:48Z","lastTransitionTime":"2026-01-24T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:48 crc kubenswrapper[4705]: E0124 07:41:48.659442 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:48Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.662878 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.662932 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.662944 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.662961 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.662972 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:48Z","lastTransitionTime":"2026-01-24T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:48 crc kubenswrapper[4705]: E0124 07:41:48.677484 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:48Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.681368 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.681415 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.681424 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.681439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.681448 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:48Z","lastTransitionTime":"2026-01-24T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:48 crc kubenswrapper[4705]: E0124 07:41:48.693535 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:48Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.696876 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.696920 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.696930 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.696947 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.696959 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:48Z","lastTransitionTime":"2026-01-24T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:48 crc kubenswrapper[4705]: E0124 07:41:48.709259 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:48Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:48 crc kubenswrapper[4705]: E0124 07:41:48.709424 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.724523 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.724557 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.724568 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.724581 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.724591 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:48Z","lastTransitionTime":"2026-01-24T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.826684 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.826734 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.826746 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.826762 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.826773 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:48Z","lastTransitionTime":"2026-01-24T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.834993 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 11:30:30.751842949 +0000 UTC Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.929477 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.929539 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.929555 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.929571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:48 crc kubenswrapper[4705]: I0124 07:41:48.929580 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:48Z","lastTransitionTime":"2026-01-24T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.032191 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.032247 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.032258 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.032273 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.032282 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:49Z","lastTransitionTime":"2026-01-24T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.134570 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.134613 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.134625 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.134641 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.134653 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:49Z","lastTransitionTime":"2026-01-24T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.236651 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.236687 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.236697 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.236712 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.236723 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:49Z","lastTransitionTime":"2026-01-24T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.338806 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.338872 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.338881 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.338894 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.338903 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:49Z","lastTransitionTime":"2026-01-24T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.440603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.440637 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.440647 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.440660 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.440676 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:49Z","lastTransitionTime":"2026-01-24T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.543397 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.543437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.543449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.543468 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.543478 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:49Z","lastTransitionTime":"2026-01-24T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.645591 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.645633 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.645643 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.645657 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.645667 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:49Z","lastTransitionTime":"2026-01-24T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.747673 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.747716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.747728 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.747744 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.747754 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:49Z","lastTransitionTime":"2026-01-24T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.835262 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 18:52:24.697305262 +0000 UTC Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.850120 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.850156 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.850166 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.850186 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.850197 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:49Z","lastTransitionTime":"2026-01-24T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.952574 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.952612 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.952623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.952650 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:49 crc kubenswrapper[4705]: I0124 07:41:49.952662 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:49Z","lastTransitionTime":"2026-01-24T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.056193 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.056243 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.056256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.056276 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.056289 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:50Z","lastTransitionTime":"2026-01-24T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.158890 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.158923 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.158932 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.158946 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.158955 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:50Z","lastTransitionTime":"2026-01-24T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.262159 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.262189 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.262197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.262213 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.262224 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:50Z","lastTransitionTime":"2026-01-24T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.364684 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.364715 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.364725 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.364739 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.364750 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:50Z","lastTransitionTime":"2026-01-24T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.467765 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.467811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.467840 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.467856 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.467868 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:50Z","lastTransitionTime":"2026-01-24T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.570212 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.570257 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.570267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.570284 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.570294 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:50Z","lastTransitionTime":"2026-01-24T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.574803 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.575009 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:50 crc kubenswrapper[4705]: E0124 07:41:50.575076 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.575103 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.575121 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:50 crc kubenswrapper[4705]: E0124 07:41:50.575230 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:41:50 crc kubenswrapper[4705]: E0124 07:41:50.575393 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:50 crc kubenswrapper[4705]: E0124 07:41:50.575502 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.672524 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.672571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.672585 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.672601 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.672615 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:50Z","lastTransitionTime":"2026-01-24T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.775084 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.775174 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.775195 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.775226 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.775247 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:50Z","lastTransitionTime":"2026-01-24T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.835361 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 14:43:49.929994647 +0000 UTC Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.877338 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.877381 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.877394 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.877410 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.877419 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:50Z","lastTransitionTime":"2026-01-24T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.980499 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.980551 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.980561 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.980583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:50 crc kubenswrapper[4705]: I0124 07:41:50.980596 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:50Z","lastTransitionTime":"2026-01-24T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.083386 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.083427 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.083436 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.083451 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.083461 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:51Z","lastTransitionTime":"2026-01-24T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.186110 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.186676 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.186778 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.186897 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.187038 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:51Z","lastTransitionTime":"2026-01-24T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.289708 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.289756 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.289766 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.289779 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.289787 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:51Z","lastTransitionTime":"2026-01-24T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.396378 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.396455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.396471 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.396496 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.396509 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:51Z","lastTransitionTime":"2026-01-24T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.498336 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.498396 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.498409 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.498425 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.498436 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:51Z","lastTransitionTime":"2026-01-24T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.589901 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.600297 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.600345 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.600353 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.600367 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.600377 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:51Z","lastTransitionTime":"2026-01-24T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.608445 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.622370 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.640277 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.653214 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc 
kubenswrapper[4705]: I0124 07:41:51.675836 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.688647 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b
14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.701899 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.701958 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.701977 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:51 crc 
kubenswrapper[4705]: I0124 07:41:51.702001 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.702019 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:51Z","lastTransitionTime":"2026-01-24T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.711959 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:42Z\\\",\\\"message\\\":\\\"vice openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.109\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0124 07:41:42.244705 6156 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nF0124 07:41:42.244594 6156 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default no\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc
77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.723921 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218
da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.737223 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.747990 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.761199 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.772557 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.781554 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.796272 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.804845 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.804882 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.804892 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.804908 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.804920 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:51Z","lastTransitionTime":"2026-01-24T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.812247 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.826206 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:51Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.836360 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:51:28.922426247 +0000 UTC Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.907310 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.907360 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.907380 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.907403 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:51 crc kubenswrapper[4705]: I0124 07:41:51.907464 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:51Z","lastTransitionTime":"2026-01-24T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.009787 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.009850 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.009862 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.009877 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.009888 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:52Z","lastTransitionTime":"2026-01-24T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.112059 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.112104 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.112116 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.112132 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.112142 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:52Z","lastTransitionTime":"2026-01-24T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.214366 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.214422 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.214437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.214458 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.214473 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:52Z","lastTransitionTime":"2026-01-24T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.316886 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.316933 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.316944 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.316959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.316969 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:52Z","lastTransitionTime":"2026-01-24T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.419435 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.419700 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.419770 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.419874 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.419950 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:52Z","lastTransitionTime":"2026-01-24T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.522599 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.522638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.522651 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.522666 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.522678 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:52Z","lastTransitionTime":"2026-01-24T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.575695 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:52 crc kubenswrapper[4705]: E0124 07:41:52.576135 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.575729 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:52 crc kubenswrapper[4705]: E0124 07:41:52.576387 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.575711 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:52 crc kubenswrapper[4705]: E0124 07:41:52.576610 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.575907 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:52 crc kubenswrapper[4705]: E0124 07:41:52.576862 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.625142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.625191 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.625205 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.625225 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.625238 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:52Z","lastTransitionTime":"2026-01-24T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.727348 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.727382 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.727392 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.727405 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.727414 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:52Z","lastTransitionTime":"2026-01-24T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.830320 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.830363 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.830376 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.830394 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.830406 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:52Z","lastTransitionTime":"2026-01-24T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.836747 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 17:21:03.069124149 +0000 UTC Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.932199 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.932450 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.932659 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.932848 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:52 crc kubenswrapper[4705]: I0124 07:41:52.933033 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:52Z","lastTransitionTime":"2026-01-24T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.035880 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.036134 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.036239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.036520 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.036611 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:53Z","lastTransitionTime":"2026-01-24T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.138440 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.138692 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.138759 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.138848 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.138933 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:53Z","lastTransitionTime":"2026-01-24T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.241573 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.241618 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.241628 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.241647 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.241659 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:53Z","lastTransitionTime":"2026-01-24T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.343555 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.343871 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.343990 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.344075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.344350 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:53Z","lastTransitionTime":"2026-01-24T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.447282 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.447321 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.447329 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.447349 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.447358 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:53Z","lastTransitionTime":"2026-01-24T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.548868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.549136 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.549212 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.549292 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.549354 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:53Z","lastTransitionTime":"2026-01-24T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.652409 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.652470 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.652483 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.652498 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.652509 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:53Z","lastTransitionTime":"2026-01-24T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.755060 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.755097 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.755106 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.755121 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.755130 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:53Z","lastTransitionTime":"2026-01-24T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.837993 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:37:59.983925024 +0000 UTC Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.857770 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.857851 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.857867 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.857888 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.857902 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:53Z","lastTransitionTime":"2026-01-24T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.960147 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.960180 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.960189 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.960205 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:53 crc kubenswrapper[4705]: I0124 07:41:53.960216 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:53Z","lastTransitionTime":"2026-01-24T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.064047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.064125 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.064330 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.064350 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.064361 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:54Z","lastTransitionTime":"2026-01-24T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.166967 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.167243 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.167336 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.167415 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.167493 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:54Z","lastTransitionTime":"2026-01-24T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.230033 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.241318 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.250420 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.264103 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.270028 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.270086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.270104 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.270127 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.270145 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:54Z","lastTransitionTime":"2026-01-24T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.280898 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.305129 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.319958 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.336297 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.352919 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.367077 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.372972 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.373002 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.373010 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.373022 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.373031 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:54Z","lastTransitionTime":"2026-01-24T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.380587 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc 
kubenswrapper[4705]: I0124 07:41:54.397260 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093
143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.412603 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.437776 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:42Z\\\",\\\"message\\\":\\\"vice openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.109\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0124 07:41:42.244705 6156 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nF0124 07:41:42.244594 6156 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default no\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc
77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.449570 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218
da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.462155 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.475677 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.475731 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.475740 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.475755 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.475764 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:54Z","lastTransitionTime":"2026-01-24T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.476507 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.487019 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"}
,{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.497922 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:54Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.575558 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.576029 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.575719 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.576756 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.575676 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.575760 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.577087 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.577239 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.577617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.577654 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.577667 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.577684 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.577702 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:54Z","lastTransitionTime":"2026-01-24T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.680810 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.680947 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.680992 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.681029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.681059 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:54Z","lastTransitionTime":"2026-01-24T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.681555 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.681762 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 07:42:26.681730107 +0000 UTC m=+85.401603435 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.682052 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.682242 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.682344 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 07:42:26.682317716 +0000 UTC m=+85.402191074 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.682257 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.682533 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.683025 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 07:42:26.682988327 +0000 UTC m=+85.402861715 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.784700 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.785051 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.785177 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.785270 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.785361 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:54Z","lastTransitionTime":"2026-01-24T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.839162 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:05:20.638553077 +0000 UTC Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.883894 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.883971 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.884166 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.884193 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.884208 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.884222 4705 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.884229 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.884241 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.884308 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 07:42:26.884285996 +0000 UTC m=+85.604159374 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:54 crc kubenswrapper[4705]: E0124 07:41:54.884338 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-24 07:42:26.884324467 +0000 UTC m=+85.604197895 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.889103 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.889172 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.889197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.889226 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.889245 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:54Z","lastTransitionTime":"2026-01-24T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.991557 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.991849 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.991946 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.992034 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:54 crc kubenswrapper[4705]: I0124 07:41:54.992116 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:54Z","lastTransitionTime":"2026-01-24T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.096577 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.097374 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.097550 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.097688 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.097813 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:55Z","lastTransitionTime":"2026-01-24T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.187103 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs\") pod \"network-metrics-daemon-mxnng\" (UID: \"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\") " pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:55 crc kubenswrapper[4705]: E0124 07:41:55.187555 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 07:41:55 crc kubenswrapper[4705]: E0124 07:41:55.187729 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs podName:aaa7a0f6-16ad-42c1-b1e2-6c080807fda1 nodeName:}" failed. No retries permitted until 2026-01-24 07:42:11.187706702 +0000 UTC m=+69.907580000 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs") pod "network-metrics-daemon-mxnng" (UID: "aaa7a0f6-16ad-42c1-b1e2-6c080807fda1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.200121 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.200157 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.200166 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.200181 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.200190 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:55Z","lastTransitionTime":"2026-01-24T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.302158 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.302197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.302208 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.302224 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.302236 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:55Z","lastTransitionTime":"2026-01-24T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.404535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.404590 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.404613 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.404639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.404661 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:55Z","lastTransitionTime":"2026-01-24T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.506871 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.506913 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.506922 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.506938 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.506947 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:55Z","lastTransitionTime":"2026-01-24T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.609600 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.609644 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.609655 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.609671 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.609680 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:55Z","lastTransitionTime":"2026-01-24T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.711723 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.711755 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.711763 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.711774 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.711785 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:55Z","lastTransitionTime":"2026-01-24T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.814039 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.814077 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.814086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.814102 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.814113 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:55Z","lastTransitionTime":"2026-01-24T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.839479 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 07:37:41.732450149 +0000 UTC Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.916536 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.916571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.916583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.916598 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:55 crc kubenswrapper[4705]: I0124 07:41:55.916610 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:55Z","lastTransitionTime":"2026-01-24T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.018422 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.018455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.018463 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.018476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.018485 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:56Z","lastTransitionTime":"2026-01-24T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.121848 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.121892 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.121904 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.121924 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.121936 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:56Z","lastTransitionTime":"2026-01-24T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.223905 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.224142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.224222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.224406 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.224548 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:56Z","lastTransitionTime":"2026-01-24T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.326941 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.327198 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.327291 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.327364 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.327426 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:56Z","lastTransitionTime":"2026-01-24T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.428702 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.428730 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.428739 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.428752 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.428762 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:56Z","lastTransitionTime":"2026-01-24T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.548934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.548976 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.548985 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.549001 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.549013 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:56Z","lastTransitionTime":"2026-01-24T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.575625 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.575673 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.575722 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:56 crc kubenswrapper[4705]: E0124 07:41:56.575748 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.575734 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:56 crc kubenswrapper[4705]: E0124 07:41:56.575911 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:56 crc kubenswrapper[4705]: E0124 07:41:56.575996 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:41:56 crc kubenswrapper[4705]: E0124 07:41:56.576094 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.651075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.651112 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.651123 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.651152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.651162 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:56Z","lastTransitionTime":"2026-01-24T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.753300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.753334 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.753343 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.753359 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.753369 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:56Z","lastTransitionTime":"2026-01-24T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.840583 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:16:28.004558993 +0000 UTC Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.855590 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.855631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.855642 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.855659 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.855670 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:56Z","lastTransitionTime":"2026-01-24T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.958752 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.959075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.959214 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.959355 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:56 crc kubenswrapper[4705]: I0124 07:41:56.959453 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:56Z","lastTransitionTime":"2026-01-24T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.061961 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.062010 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.062025 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.062045 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.062060 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:57Z","lastTransitionTime":"2026-01-24T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.164506 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.164542 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.164551 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.164566 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.164593 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:57Z","lastTransitionTime":"2026-01-24T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.267567 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.267599 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.267606 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.267619 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.267627 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:57Z","lastTransitionTime":"2026-01-24T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.382426 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.382480 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.382499 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.382523 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.382540 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:57Z","lastTransitionTime":"2026-01-24T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.486078 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.486169 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.486193 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.486227 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.486246 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:57Z","lastTransitionTime":"2026-01-24T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.576990 4705 scope.go:117] "RemoveContainer" containerID="ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.589207 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.589245 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.589266 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.589283 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.589296 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:57Z","lastTransitionTime":"2026-01-24T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.691715 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.691761 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.691776 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.691794 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.691806 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:57Z","lastTransitionTime":"2026-01-24T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.794459 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.794490 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.794499 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.794512 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.794521 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:57Z","lastTransitionTime":"2026-01-24T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.841746 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:51:25.866532163 +0000 UTC Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.897634 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.897683 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.897698 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.897717 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:57 crc kubenswrapper[4705]: I0124 07:41:57.897728 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:57Z","lastTransitionTime":"2026-01-24T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.001267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.001335 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.001349 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.001379 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.001403 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:58Z","lastTransitionTime":"2026-01-24T07:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.104361 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.104417 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.104433 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.104455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.104473 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:58Z","lastTransitionTime":"2026-01-24T07:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.206854 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.206894 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.206905 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.206921 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.206932 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:58Z","lastTransitionTime":"2026-01-24T07:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.309550 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.309595 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.309604 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.309619 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.309628 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:58Z","lastTransitionTime":"2026-01-24T07:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.411653 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.411696 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.411706 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.411723 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.411734 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:58Z","lastTransitionTime":"2026-01-24T07:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.513167 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.513217 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.513229 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.513246 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.513258 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:58Z","lastTransitionTime":"2026-01-24T07:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.574891 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.574961 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.574902 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:41:58 crc kubenswrapper[4705]: E0124 07:41:58.575019 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:41:58 crc kubenswrapper[4705]: E0124 07:41:58.575092 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.574907 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:41:58 crc kubenswrapper[4705]: E0124 07:41:58.575408 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:41:58 crc kubenswrapper[4705]: E0124 07:41:58.575160 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.615625 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.615696 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.615722 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.615757 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.615783 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:58Z","lastTransitionTime":"2026-01-24T07:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.718180 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.718232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.718256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.718277 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.718290 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:58Z","lastTransitionTime":"2026-01-24T07:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.820335 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.820369 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.820379 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.820392 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.820403 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:58Z","lastTransitionTime":"2026-01-24T07:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.842964 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:38:26.2354524 +0000 UTC Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.922633 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.922669 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.922678 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.922693 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.922702 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:58Z","lastTransitionTime":"2026-01-24T07:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.997474 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.997524 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.997533 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.997547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:58 crc kubenswrapper[4705]: I0124 07:41:58.997557 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:58Z","lastTransitionTime":"2026-01-24T07:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:59 crc kubenswrapper[4705]: E0124 07:41:59.009555 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.012918 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.012959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.012967 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.012982 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.012990 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:59Z","lastTransitionTime":"2026-01-24T07:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:59 crc kubenswrapper[4705]: E0124 07:41:59.025214 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.029444 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.029506 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.029521 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.029537 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.029547 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:59Z","lastTransitionTime":"2026-01-24T07:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:59 crc kubenswrapper[4705]: E0124 07:41:59.043686 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.047526 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.047567 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.047580 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.047598 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.047612 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:59Z","lastTransitionTime":"2026-01-24T07:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:59 crc kubenswrapper[4705]: E0124 07:41:59.062300 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.066777 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.066804 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.066813 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.066861 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.066873 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:59Z","lastTransitionTime":"2026-01-24T07:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:59 crc kubenswrapper[4705]: E0124 07:41:59.079532 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: E0124 07:41:59.079653 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.080902 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.080932 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.080941 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.080953 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.080961 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:59Z","lastTransitionTime":"2026-01-24T07:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.277865 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.277905 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.277920 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.277936 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.277948 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:59Z","lastTransitionTime":"2026-01-24T07:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.384981 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.385024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.385091 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.385119 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.385133 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:59Z","lastTransitionTime":"2026-01-24T07:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.479072 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovnkube-controller/1.log" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.480747 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerStarted","Data":"4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54"} Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.481774 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.502622 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.514637 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.535566 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:42Z\\\",\\\"message\\\":\\\"vice openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.109\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0124 07:41:42.244705 6156 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nF0124 07:41:42.244594 6156 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default 
no\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.549261 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218
da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.562437 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d7b012-83a2-4643-9c87-028b3e1a5ff5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344cb802c68b9e25afbc22949d27bd8416ae576ba6d6bd070e9281e0a6975bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d358da385a0af4a5ff14c3b23d2ce504be52d63d9b1808edfc78ed6e12bf9402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba7d064c3ebfe719451fea0f2144622154aac8eb18df6666adb9d5b464db49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.575453 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.587290 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.600837 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.611028 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.623723 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.637443 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.653655 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.675722 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.
168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b
54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.690525 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.702462 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.706410 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.706651 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.706662 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.706678 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.706688 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:59Z","lastTransitionTime":"2026-01-24T07:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.716237 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.729621 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.740210 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:41:59Z is after 2025-08-24T17:21:41Z" Jan 24 07:41:59 crc 
kubenswrapper[4705]: I0124 07:41:59.810072 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.810105 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.810114 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.810128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.810137 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:59Z","lastTransitionTime":"2026-01-24T07:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.843421 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 22:43:15.641887086 +0000 UTC Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.912889 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.912960 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.912972 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.912992 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:41:59 crc kubenswrapper[4705]: I0124 07:41:59.913006 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:41:59Z","lastTransitionTime":"2026-01-24T07:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.016231 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.016321 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.016333 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.016353 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.016366 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:00Z","lastTransitionTime":"2026-01-24T07:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.119250 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.119300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.119310 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.119325 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.119333 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:00Z","lastTransitionTime":"2026-01-24T07:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.222355 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.222400 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.222413 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.222434 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.222447 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:00Z","lastTransitionTime":"2026-01-24T07:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.325081 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.325131 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.325145 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.325160 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.325171 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:00Z","lastTransitionTime":"2026-01-24T07:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.427390 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.427436 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.427448 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.427462 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.427473 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:00Z","lastTransitionTime":"2026-01-24T07:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.485616 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovnkube-controller/2.log" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.486342 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovnkube-controller/1.log" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.490115 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerID="4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54" exitCode=1 Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.490150 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerDied","Data":"4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54"} Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.490191 4705 scope.go:117] "RemoveContainer" containerID="ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.490939 4705 scope.go:117] "RemoveContainer" containerID="4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54" Jan 24 07:42:00 crc kubenswrapper[4705]: E0124 07:42:00.491097 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.506497 4705 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d7b012-83a2-4643-9c87-028b3e1a5ff5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344cb802c68b9e25afbc22949d27bd8416ae576ba6d6bd070e9281e0a6975bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d358da385a0af4a5ff14c3b23d2ce504be52d63d9b1808edfc78ed6e12bf9402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba7d064c3ebfe719451fea0f2144622154aac8eb18df6666adb9d5b464db49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.523195 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.534612 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.534675 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.534696 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 
07:42:00.534723 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.534742 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:00Z","lastTransitionTime":"2026-01-24T07:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.542629 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.554209 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bde
af3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.564870 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.575530 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.575621 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:00 crc kubenswrapper[4705]: E0124 07:42:00.575677 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.575700 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.575728 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:00 crc kubenswrapper[4705]: E0124 07:42:00.575933 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:00 crc kubenswrapper[4705]: E0124 07:42:00.575984 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:00 crc kubenswrapper[4705]: E0124 07:42:00.576021 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.577676 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.595171 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.611525 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.633059 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.
168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b
54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.636675 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.636701 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.636708 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.636721 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.636729 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:00Z","lastTransitionTime":"2026-01-24T07:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.645630 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.657840 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.668792 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.679679 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.689301 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc 
kubenswrapper[4705]: I0124 07:42:00.703394 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093
143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.713016 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.732943 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed6dda2c48792cf899c33c03c92f41d6f866a0e37da51d7caa78a063e14df5af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:42Z\\\",\\\"message\\\":\\\"vice openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.109\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0124 07:41:42.244705 6156 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nF0124 07:41:42.244594 6156 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default no\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"40675 6322 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 07:41:59.840681 6322 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 07:41:59.840748 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840775 6322 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840932 6322 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 07:41:59.840998 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841201 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840749 6322 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 07:41:59.841282 6322 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 07:41:59.841331 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841882 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.842430 6322 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.738296 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.738328 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.738337 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.738352 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.738361 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:00Z","lastTransitionTime":"2026-01-24T07:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.744683 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:00Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 
07:42:00.841121 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.841194 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.841211 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.841236 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.841254 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:00Z","lastTransitionTime":"2026-01-24T07:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.844227 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:23:44.581191396 +0000 UTC Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.943525 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.943576 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.943594 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.943618 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:00 crc kubenswrapper[4705]: I0124 07:42:00.943634 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:00Z","lastTransitionTime":"2026-01-24T07:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.045872 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.045946 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.045968 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.045986 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.046000 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:01Z","lastTransitionTime":"2026-01-24T07:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.147904 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.147943 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.147954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.147972 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.147984 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:01Z","lastTransitionTime":"2026-01-24T07:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.250538 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.250573 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.250581 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.250596 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.250606 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:01Z","lastTransitionTime":"2026-01-24T07:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.352586 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.352627 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.352635 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.352650 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.352662 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:01Z","lastTransitionTime":"2026-01-24T07:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.455232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.455269 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.455277 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.455293 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.455304 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:01Z","lastTransitionTime":"2026-01-24T07:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.495089 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovnkube-controller/2.log" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.498689 4705 scope.go:117] "RemoveContainer" containerID="4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54" Jan 24 07:42:01 crc kubenswrapper[4705]: E0124 07:42:01.498946 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.512624 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.528509 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.547046 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.557957 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.558004 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.558014 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.558032 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.558041 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:01Z","lastTransitionTime":"2026-01-24T07:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.564084 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.575695 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc 
kubenswrapper[4705]: I0124 07:42:01.598556 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.611799 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.626321 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.638968 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.654802 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.659986 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.660027 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.660037 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.660052 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.660062 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:01Z","lastTransitionTime":"2026-01-24T07:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.667014 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.688549 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"40675 6322 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 07:41:59.840681 6322 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 07:41:59.840748 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840775 6322 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840932 6322 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 07:41:59.840998 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841201 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840749 6322 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 07:41:59.841282 6322 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 07:41:59.841331 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841882 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.842430 6322 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc
77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.699686 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218
da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.708786 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.720025 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d7b012-83a2-4643-9c87-028b3e1a5ff5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344cb802c68b9e25afbc22949d27bd8416ae576ba6d6bd070e9281e0a6975bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d358da385a0af4a5ff14c3b23d2ce504be52d63d9b1808edfc78ed6e12bf9402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba7d064c3ebfe719451fea0f2144622154aac8eb18df6666adb9d5b464db49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.733497 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.746404 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.757244 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.764074 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.764114 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.764133 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.764151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.764162 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:01Z","lastTransitionTime":"2026-01-24T07:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.771633 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.786959 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 
07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.797062 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.819354 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"40675 6322 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 07:41:59.840681 6322 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 07:41:59.840748 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840775 6322 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840932 6322 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 07:41:59.840998 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841201 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840749 6322 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 07:41:59.841282 6322 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 07:41:59.841331 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841882 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.842430 6322 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc
77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.831371 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18f140b73d9e089cff43755e810217
f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.842257 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.844673 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 15:32:00.874372687 +0000 UTC Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.854801 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d7b012-83a2-4643-9c87-028b3e1a5ff5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344cb802c68b9e25afbc22949d27bd8416ae576ba6d6bd070e9281e0a6975bba\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d358da385a0af4a5ff14c3b23d2ce504be52d63d9b1808edfc78ed6e12bf9402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba7d064c3ebfe719451fea0f2144622154aac8eb18df6666adb9d5b464db49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.866908 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:01 crc 
kubenswrapper[4705]: I0124 07:42:01.866970 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.866987 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.867008 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.867021 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:01Z","lastTransitionTime":"2026-01-24T07:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.867816 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.880792 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.895615 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883a
e350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:
41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.908016 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.919751 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.931865 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.945166 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.954989 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc 
kubenswrapper[4705]: I0124 07:42:01.969555 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.969596 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.969605 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.969618 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.969627 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:01Z","lastTransitionTime":"2026-01-24T07:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.976390 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:01 crc kubenswrapper[4705]: I0124 07:42:01.989952 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:01Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.004209 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:02Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.072358 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.072398 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.072429 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.072447 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.072456 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:02Z","lastTransitionTime":"2026-01-24T07:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.174740 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.174790 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.174803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.174840 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.174853 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:02Z","lastTransitionTime":"2026-01-24T07:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.276762 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.276798 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.276806 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.276833 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.276842 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:02Z","lastTransitionTime":"2026-01-24T07:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.378748 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.378813 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.378897 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.378927 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.378949 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:02Z","lastTransitionTime":"2026-01-24T07:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.480977 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.481019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.481031 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.481045 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.481057 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:02Z","lastTransitionTime":"2026-01-24T07:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.575280 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.575338 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:02 crc kubenswrapper[4705]: E0124 07:42:02.575403 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:02 crc kubenswrapper[4705]: E0124 07:42:02.575479 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.575285 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:02 crc kubenswrapper[4705]: E0124 07:42:02.575570 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.575951 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:02 crc kubenswrapper[4705]: E0124 07:42:02.576021 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.583429 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.583457 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.583469 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.583487 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.583498 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:02Z","lastTransitionTime":"2026-01-24T07:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.686373 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.686422 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.686433 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.686456 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.686468 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:02Z","lastTransitionTime":"2026-01-24T07:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.790089 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.790127 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.790136 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.790150 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.790159 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:02Z","lastTransitionTime":"2026-01-24T07:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.846136 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:43:10.001368241 +0000 UTC Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.893396 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.893429 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.893439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.893453 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.893462 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:02Z","lastTransitionTime":"2026-01-24T07:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.996114 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.996168 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.996180 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.996198 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:02 crc kubenswrapper[4705]: I0124 07:42:02.996210 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:02Z","lastTransitionTime":"2026-01-24T07:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.104026 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.104075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.104087 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.104109 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.104123 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:03Z","lastTransitionTime":"2026-01-24T07:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.206282 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.206314 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.206323 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.206336 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.206344 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:03Z","lastTransitionTime":"2026-01-24T07:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.308896 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.309158 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.309267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.309353 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.309426 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:03Z","lastTransitionTime":"2026-01-24T07:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.411977 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.412189 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.412290 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.412367 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.412423 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:03Z","lastTransitionTime":"2026-01-24T07:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.513782 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.514078 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.514161 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.514245 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.514310 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:03Z","lastTransitionTime":"2026-01-24T07:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.616807 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.617097 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.617172 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.617271 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.617351 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:03Z","lastTransitionTime":"2026-01-24T07:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.719792 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.719853 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.719867 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.719884 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.719896 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:03Z","lastTransitionTime":"2026-01-24T07:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.822078 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.822114 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.822122 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.822138 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.822148 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:03Z","lastTransitionTime":"2026-01-24T07:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.846748 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:03:40.327711451 +0000 UTC Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.924181 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.924235 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.924247 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.924260 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:03 crc kubenswrapper[4705]: I0124 07:42:03.924269 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:03Z","lastTransitionTime":"2026-01-24T07:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.027245 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.027283 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.027293 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.027316 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.027331 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:04Z","lastTransitionTime":"2026-01-24T07:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.129314 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.129839 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.129920 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.130051 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.130145 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:04Z","lastTransitionTime":"2026-01-24T07:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.232764 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.232795 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.232803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.232833 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.232843 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:04Z","lastTransitionTime":"2026-01-24T07:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.335889 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.336350 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.336478 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.336604 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.336717 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:04Z","lastTransitionTime":"2026-01-24T07:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.439103 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.439139 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.439149 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.439170 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.439183 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:04Z","lastTransitionTime":"2026-01-24T07:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.541627 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.541907 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.541985 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.542100 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.542175 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:04Z","lastTransitionTime":"2026-01-24T07:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.575481 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.575548 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.575505 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.575481 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:04 crc kubenswrapper[4705]: E0124 07:42:04.575629 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:04 crc kubenswrapper[4705]: E0124 07:42:04.575707 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:04 crc kubenswrapper[4705]: E0124 07:42:04.575762 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:04 crc kubenswrapper[4705]: E0124 07:42:04.575849 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.644491 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.644530 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.644544 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.644560 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.644571 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:04Z","lastTransitionTime":"2026-01-24T07:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.746331 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.746360 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.746369 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.746382 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.746391 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:04Z","lastTransitionTime":"2026-01-24T07:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.847269 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:28:48.209433401 +0000 UTC Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.848786 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.848813 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.848843 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.848861 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.848870 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:04Z","lastTransitionTime":"2026-01-24T07:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.951321 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.951369 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.951380 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.951394 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:04 crc kubenswrapper[4705]: I0124 07:42:04.951413 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:04Z","lastTransitionTime":"2026-01-24T07:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.052968 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.053010 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.053019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.053033 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.053042 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:05Z","lastTransitionTime":"2026-01-24T07:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.155440 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.155505 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.155514 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.155530 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.155540 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:05Z","lastTransitionTime":"2026-01-24T07:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.257814 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.257866 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.257880 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.257895 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.257906 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:05Z","lastTransitionTime":"2026-01-24T07:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.359798 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.359869 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.359883 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.359900 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.359911 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:05Z","lastTransitionTime":"2026-01-24T07:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.462000 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.462301 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.462400 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.462468 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.462557 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:05Z","lastTransitionTime":"2026-01-24T07:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.565150 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.565178 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.565187 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.565200 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.565210 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:05Z","lastTransitionTime":"2026-01-24T07:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.667130 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.667198 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.667227 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.667251 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.667267 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:05Z","lastTransitionTime":"2026-01-24T07:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.769385 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.769737 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.769962 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.770129 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.770290 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:05Z","lastTransitionTime":"2026-01-24T07:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.847895 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 07:57:27.413630817 +0000 UTC Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.872716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.873048 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.873133 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.873224 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.873302 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:05Z","lastTransitionTime":"2026-01-24T07:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.976724 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.977023 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.977145 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.977238 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:05 crc kubenswrapper[4705]: I0124 07:42:05.977322 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:05Z","lastTransitionTime":"2026-01-24T07:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.079239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.079275 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.079285 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.079299 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.079309 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:06Z","lastTransitionTime":"2026-01-24T07:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.181528 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.181575 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.181587 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.181601 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.181613 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:06Z","lastTransitionTime":"2026-01-24T07:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.283490 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.283533 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.283545 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.283563 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.283576 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:06Z","lastTransitionTime":"2026-01-24T07:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.386088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.386130 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.386139 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.386156 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.386166 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:06Z","lastTransitionTime":"2026-01-24T07:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.488603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.488856 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.488919 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.488981 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.489060 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:06Z","lastTransitionTime":"2026-01-24T07:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.575536 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.575787 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.575561 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.575536 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:06 crc kubenswrapper[4705]: E0124 07:42:06.575887 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:06 crc kubenswrapper[4705]: E0124 07:42:06.576009 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:06 crc kubenswrapper[4705]: E0124 07:42:06.576072 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:06 crc kubenswrapper[4705]: E0124 07:42:06.576111 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.591456 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.591507 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.591525 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.591544 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.591557 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:06Z","lastTransitionTime":"2026-01-24T07:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.694021 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.694062 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.694071 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.694087 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.694097 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:06Z","lastTransitionTime":"2026-01-24T07:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.796250 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.796282 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.796291 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.796304 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.796314 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:06Z","lastTransitionTime":"2026-01-24T07:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.849705 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 19:08:21.021218378 +0000 UTC Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.899251 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.899492 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.899560 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.899635 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:06 crc kubenswrapper[4705]: I0124 07:42:06.899697 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:06Z","lastTransitionTime":"2026-01-24T07:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.002538 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.002576 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.002585 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.002606 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.002618 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:07Z","lastTransitionTime":"2026-01-24T07:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.105214 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.105255 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.105264 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.105280 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.105293 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:07Z","lastTransitionTime":"2026-01-24T07:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.208003 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.208040 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.208049 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.208065 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.208079 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:07Z","lastTransitionTime":"2026-01-24T07:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.310132 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.310214 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.310228 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.310247 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.310263 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:07Z","lastTransitionTime":"2026-01-24T07:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.412292 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.412333 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.412346 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.412365 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.412379 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:07Z","lastTransitionTime":"2026-01-24T07:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.515233 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.515294 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.515303 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.515319 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.515330 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:07Z","lastTransitionTime":"2026-01-24T07:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.618371 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.618424 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.618434 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.618450 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.618461 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:07Z","lastTransitionTime":"2026-01-24T07:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.720956 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.720987 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.720997 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.721013 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.721023 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:07Z","lastTransitionTime":"2026-01-24T07:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.823494 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.823534 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.823548 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.823564 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.823580 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:07Z","lastTransitionTime":"2026-01-24T07:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.851290 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 22:06:47.464236727 +0000 UTC Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.925759 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.925801 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.925812 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.925849 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:07 crc kubenswrapper[4705]: I0124 07:42:07.925861 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:07Z","lastTransitionTime":"2026-01-24T07:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.028034 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.028072 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.028081 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.028095 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.028105 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:08Z","lastTransitionTime":"2026-01-24T07:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.129966 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.130014 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.130024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.130042 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.130053 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:08Z","lastTransitionTime":"2026-01-24T07:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.232163 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.232244 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.232262 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.232284 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.232299 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:08Z","lastTransitionTime":"2026-01-24T07:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.334996 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.335054 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.335066 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.335080 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.335090 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:08Z","lastTransitionTime":"2026-01-24T07:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.437472 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.437716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.437960 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.438063 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.438145 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:08Z","lastTransitionTime":"2026-01-24T07:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.540531 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.540568 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.540578 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.540595 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.540607 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:08Z","lastTransitionTime":"2026-01-24T07:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.575521 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.575586 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:08 crc kubenswrapper[4705]: E0124 07:42:08.575619 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:08 crc kubenswrapper[4705]: E0124 07:42:08.575706 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.575780 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.575817 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:08 crc kubenswrapper[4705]: E0124 07:42:08.575923 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:08 crc kubenswrapper[4705]: E0124 07:42:08.576000 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.642502 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.642536 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.642546 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.642558 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.642570 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:08Z","lastTransitionTime":"2026-01-24T07:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.744688 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.744747 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.744759 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.744776 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.744787 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:08Z","lastTransitionTime":"2026-01-24T07:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.847080 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.847119 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.847132 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.847149 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.847162 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:08Z","lastTransitionTime":"2026-01-24T07:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.852453 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:44:11.257080159 +0000 UTC Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.950118 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.950192 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.950206 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.950234 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:08 crc kubenswrapper[4705]: I0124 07:42:08.950257 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:08Z","lastTransitionTime":"2026-01-24T07:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.052492 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.052530 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.052542 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.052559 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.052572 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:09Z","lastTransitionTime":"2026-01-24T07:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.149729 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.150085 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.150178 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.150271 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.150359 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:09Z","lastTransitionTime":"2026-01-24T07:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:09 crc kubenswrapper[4705]: E0124 07:42:09.173049 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.180013 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.180061 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.180072 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.180090 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.180103 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:09Z","lastTransitionTime":"2026-01-24T07:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:09 crc kubenswrapper[4705]: E0124 07:42:09.239956 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.244229 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.244411 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.244518 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.244667 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.244958 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:09Z","lastTransitionTime":"2026-01-24T07:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:09 crc kubenswrapper[4705]: E0124 07:42:09.258866 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:09 crc kubenswrapper[4705]: E0124 07:42:09.259103 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.261089 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.261134 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.261146 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.261166 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.261182 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:09Z","lastTransitionTime":"2026-01-24T07:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.363565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.363603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.363614 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.363628 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.363637 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:09Z","lastTransitionTime":"2026-01-24T07:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.465644 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.465712 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.465727 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.465743 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.465773 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:09Z","lastTransitionTime":"2026-01-24T07:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.568533 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.568578 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.568590 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.568607 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.568619 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:09Z","lastTransitionTime":"2026-01-24T07:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.670991 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.671028 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.671050 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.671064 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.671077 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:09Z","lastTransitionTime":"2026-01-24T07:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.774124 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.774163 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.774171 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.774189 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.774199 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:09Z","lastTransitionTime":"2026-01-24T07:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.853066 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:35:58.671814816 +0000 UTC Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.876927 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.876983 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.876998 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.877016 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.877031 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:09Z","lastTransitionTime":"2026-01-24T07:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.979584 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.979623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.979636 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.979653 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:09 crc kubenswrapper[4705]: I0124 07:42:09.979663 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:09Z","lastTransitionTime":"2026-01-24T07:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.081666 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.081708 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.081719 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.081734 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.081742 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:10Z","lastTransitionTime":"2026-01-24T07:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.185365 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.185423 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.185436 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.185457 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.185470 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:10Z","lastTransitionTime":"2026-01-24T07:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.287421 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.287458 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.287466 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.287481 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.287491 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:10Z","lastTransitionTime":"2026-01-24T07:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.389586 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.389633 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.389646 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.389666 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.389678 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:10Z","lastTransitionTime":"2026-01-24T07:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.492244 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.492275 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.492284 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.492297 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.492305 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:10Z","lastTransitionTime":"2026-01-24T07:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.575299 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.575327 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.575352 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.575387 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:10 crc kubenswrapper[4705]: E0124 07:42:10.575447 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:10 crc kubenswrapper[4705]: E0124 07:42:10.575533 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:10 crc kubenswrapper[4705]: E0124 07:42:10.575599 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:10 crc kubenswrapper[4705]: E0124 07:42:10.575647 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.595520 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.595555 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.595565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.595582 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.595596 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:10Z","lastTransitionTime":"2026-01-24T07:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.698900 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.698931 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.698943 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.698960 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.698971 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:10Z","lastTransitionTime":"2026-01-24T07:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.800599 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.800643 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.800652 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.800668 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.800678 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:10Z","lastTransitionTime":"2026-01-24T07:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.854022 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:52:43.568895411 +0000 UTC Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.903315 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.903358 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.903373 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.903390 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:10 crc kubenswrapper[4705]: I0124 07:42:10.903406 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:10Z","lastTransitionTime":"2026-01-24T07:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.006997 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.007035 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.007048 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.007062 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.007073 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:11Z","lastTransitionTime":"2026-01-24T07:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.110001 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.110048 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.110062 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.110079 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.110090 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:11Z","lastTransitionTime":"2026-01-24T07:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.203038 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs\") pod \"network-metrics-daemon-mxnng\" (UID: \"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\") " pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:11 crc kubenswrapper[4705]: E0124 07:42:11.203276 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 07:42:11 crc kubenswrapper[4705]: E0124 07:42:11.203390 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs podName:aaa7a0f6-16ad-42c1-b1e2-6c080807fda1 nodeName:}" failed. No retries permitted until 2026-01-24 07:42:43.203367924 +0000 UTC m=+101.923241212 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs") pod "network-metrics-daemon-mxnng" (UID: "aaa7a0f6-16ad-42c1-b1e2-6c080807fda1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.212666 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.212727 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.212747 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.212772 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.212792 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:11Z","lastTransitionTime":"2026-01-24T07:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.314853 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.314892 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.314902 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.314916 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.314926 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:11Z","lastTransitionTime":"2026-01-24T07:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.417511 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.417545 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.417554 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.417567 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.417576 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:11Z","lastTransitionTime":"2026-01-24T07:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.520427 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.520473 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.520491 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.520524 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.520537 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:11Z","lastTransitionTime":"2026-01-24T07:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.589392 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.597481 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"40675 6322 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 07:41:59.840681 6322 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 07:41:59.840748 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840775 6322 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840932 6322 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 07:41:59.840998 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841201 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840749 6322 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 07:41:59.841282 6322 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 07:41:59.841331 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841882 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.842430 6322 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc
77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.607502 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218
da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.620573 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.623949 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.623988 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.623999 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.624037 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.624051 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:11Z","lastTransitionTime":"2026-01-24T07:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.632937 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.645391 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.654628 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18
f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.663149 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.674986 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d7b012-83a2-4643-9c87-028b3e1a5ff5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344cb802c68b9e25afbc22949d27bd8416ae576ba6d6bd070e9281e0a6975bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d358da385a0af4a5ff14c3b23d2ce504be52d63d9b1808edfc78ed6e12bf9402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba7d064c3ebfe719451fea0f2144622154aac8eb18df6666adb9d5b464db49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.685967 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.699070 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.716397 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.727395 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.727427 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.727437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.727453 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.727465 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:11Z","lastTransitionTime":"2026-01-24T07:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.730258 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26
c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.744468 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.766296 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.782505 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.792484 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc 
kubenswrapper[4705]: I0124 07:42:11.811677 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.829898 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.829925 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.829934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.829949 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.829958 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:11Z","lastTransitionTime":"2026-01-24T07:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.843807 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.854973 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:24:50.437673374 +0000 UTC Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.933293 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.933328 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.933341 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.933358 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 
07:42:11 crc kubenswrapper[4705]: I0124 07:42:11.933370 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:11Z","lastTransitionTime":"2026-01-24T07:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.036176 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.036212 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.036223 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.036241 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.036254 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:12Z","lastTransitionTime":"2026-01-24T07:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.138623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.138659 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.138670 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.138686 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.138727 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:12Z","lastTransitionTime":"2026-01-24T07:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.241371 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.241419 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.241429 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.241445 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.241454 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:12Z","lastTransitionTime":"2026-01-24T07:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.343584 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.343631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.343642 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.343660 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.343670 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:12Z","lastTransitionTime":"2026-01-24T07:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.446147 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.446208 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.446221 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.446246 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.446266 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:12Z","lastTransitionTime":"2026-01-24T07:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.547903 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.547935 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.547943 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.547955 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.547965 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:12Z","lastTransitionTime":"2026-01-24T07:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.575296 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.575340 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.575382 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:12 crc kubenswrapper[4705]: E0124 07:42:12.575453 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.575301 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:12 crc kubenswrapper[4705]: E0124 07:42:12.575617 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:12 crc kubenswrapper[4705]: E0124 07:42:12.575654 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:12 crc kubenswrapper[4705]: E0124 07:42:12.575744 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.650016 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.650298 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.650371 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.650476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.650548 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:12Z","lastTransitionTime":"2026-01-24T07:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.753260 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.753606 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.753711 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.753807 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.753932 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:12Z","lastTransitionTime":"2026-01-24T07:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.855270 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 14:13:53.716207106 +0000 UTC Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.856376 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.856419 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.856431 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.856450 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.856460 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:12Z","lastTransitionTime":"2026-01-24T07:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.959183 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.959228 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.959241 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.959260 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:12 crc kubenswrapper[4705]: I0124 07:42:12.959274 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:12Z","lastTransitionTime":"2026-01-24T07:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.062191 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.062229 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.062239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.062254 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.062266 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:13Z","lastTransitionTime":"2026-01-24T07:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.164400 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.164448 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.164461 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.164481 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.164494 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:13Z","lastTransitionTime":"2026-01-24T07:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.266587 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.266902 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.266989 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.267065 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.267129 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:13Z","lastTransitionTime":"2026-01-24T07:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.368927 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.368972 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.368982 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.368996 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.369006 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:13Z","lastTransitionTime":"2026-01-24T07:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.471479 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.471520 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.471532 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.471547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.471558 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:13Z","lastTransitionTime":"2026-01-24T07:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.574195 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.574236 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.574247 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.574264 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.574275 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:13Z","lastTransitionTime":"2026-01-24T07:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.676601 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.676656 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.676672 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.676691 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.676704 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:13Z","lastTransitionTime":"2026-01-24T07:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.779582 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.779617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.779626 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.779640 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.779652 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:13Z","lastTransitionTime":"2026-01-24T07:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.856354 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 12:14:31.295212823 +0000 UTC Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.882187 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.882239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.882250 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.882265 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:13 crc kubenswrapper[4705]: I0124 07:42:13.882276 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:13Z","lastTransitionTime":"2026-01-24T07:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.022347 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.022402 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.022412 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.022426 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.022436 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:14Z","lastTransitionTime":"2026-01-24T07:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.125251 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.125300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.125314 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.125332 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.125343 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:14Z","lastTransitionTime":"2026-01-24T07:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.227391 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.227433 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.227446 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.227463 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.227473 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:14Z","lastTransitionTime":"2026-01-24T07:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.329426 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.329633 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.329674 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.329702 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.329722 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:14Z","lastTransitionTime":"2026-01-24T07:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.432844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.432893 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.432919 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.432940 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.432964 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:14Z","lastTransitionTime":"2026-01-24T07:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.536173 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.536218 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.536227 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.536242 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.536251 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:14Z","lastTransitionTime":"2026-01-24T07:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.575377 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.575451 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.575403 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.575392 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:14 crc kubenswrapper[4705]: E0124 07:42:14.575769 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:14 crc kubenswrapper[4705]: E0124 07:42:14.575868 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:14 crc kubenswrapper[4705]: E0124 07:42:14.575958 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:14 crc kubenswrapper[4705]: E0124 07:42:14.576009 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.576755 4705 scope.go:117] "RemoveContainer" containerID="4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54" Jan 24 07:42:14 crc kubenswrapper[4705]: E0124 07:42:14.577003 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.638865 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.638909 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.638918 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.638932 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.638942 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:14Z","lastTransitionTime":"2026-01-24T07:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.741898 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.742027 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.742056 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.742105 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.742130 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:14Z","lastTransitionTime":"2026-01-24T07:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.845115 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.845145 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.845153 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.845167 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.845175 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:14Z","lastTransitionTime":"2026-01-24T07:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.857068 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 07:45:51.251154526 +0000 UTC Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.948622 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.948659 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.948668 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.948685 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:14 crc kubenswrapper[4705]: I0124 07:42:14.948696 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:14Z","lastTransitionTime":"2026-01-24T07:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.051837 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.051864 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.051874 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.051888 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.051898 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:15Z","lastTransitionTime":"2026-01-24T07:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.154051 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.154086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.154096 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.154110 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.154119 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:15Z","lastTransitionTime":"2026-01-24T07:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.256952 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.256985 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.256994 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.257009 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.257018 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:15Z","lastTransitionTime":"2026-01-24T07:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.359152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.359217 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.359288 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.359316 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.359336 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:15Z","lastTransitionTime":"2026-01-24T07:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.461985 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.462023 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.462033 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.462049 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.462060 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:15Z","lastTransitionTime":"2026-01-24T07:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.544692 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9wbv_5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd/kube-multus/0.log" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.544736 4705 generic.go:334] "Generic (PLEG): container finished" podID="5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd" containerID="b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0" exitCode=1 Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.544761 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9wbv" event={"ID":"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd","Type":"ContainerDied","Data":"b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0"} Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.545150 4705 scope.go:117] "RemoveContainer" containerID="b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.557965 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e9e3fb-4cb5-4a4e-9a26-b5f4964ccfbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bac307186907ea4fde1cbf870e0ab89488fae41b12cad0fa69e874d5272f1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.564649 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.564691 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.564705 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.564725 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.564738 4705 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:15Z","lastTransitionTime":"2026-01-24T07:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.572751 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.586467 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.601107 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.617257 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.627045 4705 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc 
kubenswrapper[4705]: I0124 07:42:15.654703 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.666927 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.666964 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.666975 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.666993 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.667026 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:15Z","lastTransitionTime":"2026-01-24T07:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.668128 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.679085 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.689455 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.702548 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.712139 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.729971 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"40675 6322 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 07:41:59.840681 6322 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 07:41:59.840748 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840775 6322 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840932 6322 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 07:41:59.840998 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841201 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840749 6322 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 07:41:59.841282 6322 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 07:41:59.841331 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841882 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.842430 6322 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc
77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.741804 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218
da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.751107 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.761113 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d7b012-83a2-4643-9c87-028b3e1a5ff5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344cb802c68b9e25afbc22949d27bd8416ae576ba6d6bd070e9281e0a6975bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d358da385a0af4a5ff14c3b23d2ce504be52d63d9b1808edfc78ed6e12bf9402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba7d064c3ebfe719451fea0f2144622154aac8eb18df6666adb9d5b464db49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.771876 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.771925 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.771944 4705 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.771969 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.771987 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:15Z","lastTransitionTime":"2026-01-24T07:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.774145 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.785534 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:42:15Z\\\",\\\"message\\\":\\\"2026-01-24T07:41:30+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_58e09f55-5d8d-401c-8549-e5906469f9a4\\\\n2026-01-24T07:41:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_58e09f55-5d8d-401c-8549-e5906469f9a4 to /host/opt/cni/bin/\\\\n2026-01-24T07:41:30Z [verbose] multus-daemon started\\\\n2026-01-24T07:41:30Z [verbose] Readiness Indicator file check\\\\n2026-01-24T07:42:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.796408 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18f140b73d9e089cff43755e810217
f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.858038 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:03:54.295706173 +0000 UTC Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.874720 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.874764 4705 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.874776 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.874792 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.874803 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:15Z","lastTransitionTime":"2026-01-24T07:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.977134 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.977188 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.977200 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.977218 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:15 crc kubenswrapper[4705]: I0124 07:42:15.977269 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:15Z","lastTransitionTime":"2026-01-24T07:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.080091 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.080132 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.080144 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.080161 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.080173 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:16Z","lastTransitionTime":"2026-01-24T07:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.183400 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.183463 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.183477 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.183503 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.183514 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:16Z","lastTransitionTime":"2026-01-24T07:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.286173 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.286213 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.286225 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.286239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.286249 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:16Z","lastTransitionTime":"2026-01-24T07:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.462105 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.462138 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.462148 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.462163 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.462173 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:16Z","lastTransitionTime":"2026-01-24T07:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.549754 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9wbv_5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd/kube-multus/0.log" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.549839 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9wbv" event={"ID":"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd","Type":"ContainerStarted","Data":"5a5e5acaef986257b20ca3ea8d8222ce26d04290d2ba0562be112581ae9f12c8"} Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.570911 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.570946 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.570956 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.570973 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.570983 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:16Z","lastTransitionTime":"2026-01-24T07:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.574222 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"40675 6322 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 07:41:59.840681 6322 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 07:41:59.840748 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840775 6322 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840932 6322 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 07:41:59.840998 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841201 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840749 6322 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 07:41:59.841282 6322 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 07:41:59.841331 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841882 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.842430 6322 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc
77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.575420 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.575482 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:16 crc kubenswrapper[4705]: E0124 07:42:16.576044 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.576196 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:16 crc kubenswrapper[4705]: E0124 07:42:16.576259 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.576348 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:16 crc kubenswrapper[4705]: E0124 07:42:16.576522 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:16 crc kubenswrapper[4705]: E0124 07:42:16.579939 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.585332 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control
-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.602076 4705 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f88
49bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.614233 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.627525 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a5e5acaef986257b20ca3ea8d8222ce26d04290d2ba0562be112581ae9f12c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:42:15Z\\\",\\\"message\\\":\\\"2026-01-24T07:41:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_58e09f55-5d8d-401c-8549-e5906469f9a4\\\\n2026-01-24T07:41:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_58e09f55-5d8d-401c-8549-e5906469f9a4 to /host/opt/cni/bin/\\\\n2026-01-24T07:41:30Z [verbose] multus-daemon started\\\\n2026-01-24T07:41:30Z [verbose] Readiness Indicator file check\\\\n2026-01-24T07:42:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.639876 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18f140b73d9e089cff43755e810217
f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.649894 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.663087 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d7b012-83a2-4643-9c87-028b3e1a5ff5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344cb802c68b9e25afbc22949d27bd8416ae576ba6d6bd070e9281e0a6975bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d358da385a0af4a5ff14c3b23d2ce504be52d63d9b1808edfc78ed6e12bf9402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba7d064c3ebfe719451fea0f2144622154aac8eb18df6666adb9d5b464db49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.674075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.674114 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.674126 4705 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.674144 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.674153 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:16Z","lastTransitionTime":"2026-01-24T07:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.676101 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.691206 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.705274 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.716701 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e9e3fb-4cb5-4a4e-9a26-b5f4964ccfbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bac307186907ea4fde1cbf870e0ab89488fae41b12cad0fa69e874d5272f1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is 
after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.730247 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd7
29fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578b
c18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.743456 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.754792 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.766776 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.776328 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.776364 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.776375 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.776389 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.776402 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:16Z","lastTransitionTime":"2026-01-24T07:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.777025 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc 
kubenswrapper[4705]: I0124 07:42:16.795852 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.807612 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.858878 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 09:48:54.721852953 +0000 UTC Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.878720 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 
07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.878749 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.878758 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.878772 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.878780 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:16Z","lastTransitionTime":"2026-01-24T07:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.981023 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.981124 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.981134 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.981148 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:16 crc kubenswrapper[4705]: I0124 07:42:16.981159 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:16Z","lastTransitionTime":"2026-01-24T07:42:16Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.084110 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.084148 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.084157 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.084170 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.084179 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:17Z","lastTransitionTime":"2026-01-24T07:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.185867 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.185904 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.185917 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.185932 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.185943 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:17Z","lastTransitionTime":"2026-01-24T07:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.288670 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.288707 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.288715 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.288731 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.288741 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:17Z","lastTransitionTime":"2026-01-24T07:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.390968 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.391021 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.391040 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.391067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.391106 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:17Z","lastTransitionTime":"2026-01-24T07:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.493262 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.493309 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.493320 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.493335 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.493346 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:17Z","lastTransitionTime":"2026-01-24T07:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.595963 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.596006 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.596019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.596038 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.596050 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:17Z","lastTransitionTime":"2026-01-24T07:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.698724 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.698778 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.698797 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.698854 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.698879 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:17Z","lastTransitionTime":"2026-01-24T07:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.800614 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.800641 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.800649 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.800661 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.800670 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:17Z","lastTransitionTime":"2026-01-24T07:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.859282 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 18:09:19.429236699 +0000 UTC Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.903113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.903136 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.903143 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.903155 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:17 crc kubenswrapper[4705]: I0124 07:42:17.903164 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:17Z","lastTransitionTime":"2026-01-24T07:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.006002 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.006037 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.006046 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.006061 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.006070 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:18Z","lastTransitionTime":"2026-01-24T07:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.108802 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.108878 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.108893 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.108913 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.108928 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:18Z","lastTransitionTime":"2026-01-24T07:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.210925 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.210962 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.210973 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.210989 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.211000 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:18Z","lastTransitionTime":"2026-01-24T07:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.313551 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.313598 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.313609 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.313627 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.313637 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:18Z","lastTransitionTime":"2026-01-24T07:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.415584 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.415869 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.415961 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.416036 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.416106 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:18Z","lastTransitionTime":"2026-01-24T07:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.518962 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.519007 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.519023 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.519041 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.519052 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:18Z","lastTransitionTime":"2026-01-24T07:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.574994 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.575075 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.575205 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:18 crc kubenswrapper[4705]: E0124 07:42:18.575220 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.575254 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:18 crc kubenswrapper[4705]: E0124 07:42:18.575391 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:18 crc kubenswrapper[4705]: E0124 07:42:18.575494 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:18 crc kubenswrapper[4705]: E0124 07:42:18.575586 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.621748 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.621797 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.621808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.621845 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.621857 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:18Z","lastTransitionTime":"2026-01-24T07:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.724761 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.724811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.724843 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.724861 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.724873 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:18Z","lastTransitionTime":"2026-01-24T07:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.827632 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.827680 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.827693 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.827709 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.827719 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:18Z","lastTransitionTime":"2026-01-24T07:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.859888 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:58:56.968170314 +0000 UTC Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.930902 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.930963 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.930989 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.931018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:18 crc kubenswrapper[4705]: I0124 07:42:18.931038 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:18Z","lastTransitionTime":"2026-01-24T07:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.033497 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.033531 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.033544 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.033559 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.033570 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:19Z","lastTransitionTime":"2026-01-24T07:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.135234 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.135283 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.135297 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.135314 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.135326 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:19Z","lastTransitionTime":"2026-01-24T07:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.237128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.237161 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.237168 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.237180 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.237190 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:19Z","lastTransitionTime":"2026-01-24T07:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.354396 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.354441 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.354453 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.354489 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.354501 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:19Z","lastTransitionTime":"2026-01-24T07:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:19 crc kubenswrapper[4705]: E0124 07:42:19.367574 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.372286 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.372321 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.372332 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.372347 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.372356 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:19Z","lastTransitionTime":"2026-01-24T07:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:19 crc kubenswrapper[4705]: E0124 07:42:19.384658 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.387886 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.387926 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.387938 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.387956 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.387970 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:19Z","lastTransitionTime":"2026-01-24T07:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:19 crc kubenswrapper[4705]: E0124 07:42:19.399712 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.403787 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.403843 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.403855 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.403868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.403877 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:19Z","lastTransitionTime":"2026-01-24T07:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:19 crc kubenswrapper[4705]: E0124 07:42:19.416096 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.419532 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.419559 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.419569 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.419581 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.419590 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:19Z","lastTransitionTime":"2026-01-24T07:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:19 crc kubenswrapper[4705]: E0124 07:42:19.431664 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:19 crc kubenswrapper[4705]: E0124 07:42:19.431780 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.434884 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.434915 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.434926 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.434959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.434972 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:19Z","lastTransitionTime":"2026-01-24T07:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.538095 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.538142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.538153 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.538171 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.538181 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:19Z","lastTransitionTime":"2026-01-24T07:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.640221 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.640338 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.640353 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.640372 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.640385 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:19Z","lastTransitionTime":"2026-01-24T07:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.742629 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.742674 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.742683 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.742697 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.742707 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:19Z","lastTransitionTime":"2026-01-24T07:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.846065 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.846110 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.846119 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.846137 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.846148 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:19Z","lastTransitionTime":"2026-01-24T07:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.860994 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:31:05.73421613 +0000 UTC Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.948857 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.948904 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.948919 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.948939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:19 crc kubenswrapper[4705]: I0124 07:42:19.948950 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:19Z","lastTransitionTime":"2026-01-24T07:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.050864 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.050904 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.050913 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.050927 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.050937 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:20Z","lastTransitionTime":"2026-01-24T07:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.153706 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.153771 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.153792 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.153850 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.153871 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:20Z","lastTransitionTime":"2026-01-24T07:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.256415 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.256465 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.256476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.256494 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.256508 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:20Z","lastTransitionTime":"2026-01-24T07:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.358802 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.358878 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.358890 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.358908 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.358919 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:20Z","lastTransitionTime":"2026-01-24T07:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.461533 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.461991 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.462013 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.462035 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.462049 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:20Z","lastTransitionTime":"2026-01-24T07:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.564025 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.564055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.564066 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.564081 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.564093 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:20Z","lastTransitionTime":"2026-01-24T07:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.575132 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.575196 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:20 crc kubenswrapper[4705]: E0124 07:42:20.575231 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.575328 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:20 crc kubenswrapper[4705]: E0124 07:42:20.575322 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:20 crc kubenswrapper[4705]: E0124 07:42:20.575378 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.575561 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:20 crc kubenswrapper[4705]: E0124 07:42:20.575707 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.666086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.666154 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.666176 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.666204 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.666224 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:20Z","lastTransitionTime":"2026-01-24T07:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.768722 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.768765 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.768774 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.768788 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.768797 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:20Z","lastTransitionTime":"2026-01-24T07:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.862091 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:19:03.944638376 +0000 UTC Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.871150 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.871225 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.871237 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.871252 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.871277 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:20Z","lastTransitionTime":"2026-01-24T07:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.974151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.974199 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.974211 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.974225 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:20 crc kubenswrapper[4705]: I0124 07:42:20.974237 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:20Z","lastTransitionTime":"2026-01-24T07:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.076204 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.076232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.076239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.076251 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.076261 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:21Z","lastTransitionTime":"2026-01-24T07:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.178562 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.178615 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.178626 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.178644 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.178657 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:21Z","lastTransitionTime":"2026-01-24T07:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.280545 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.280580 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.280589 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.280602 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.280611 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:21Z","lastTransitionTime":"2026-01-24T07:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.383135 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.383197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.383211 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.383227 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.383239 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:21Z","lastTransitionTime":"2026-01-24T07:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.485535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.485616 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.485642 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.485674 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.485697 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:21Z","lastTransitionTime":"2026-01-24T07:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.587522 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.587553 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.587564 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.587578 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.587588 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:21Z","lastTransitionTime":"2026-01-24T07:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.591798 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e9e3fb-4cb5-4a4e-9a26-b5f4964ccfbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bac307186907ea4fde1cbf870e0ab89488fae41b12cad0fa69e874d5272f1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.610864 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.633472 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.651648 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.671376 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.
168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b
54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.685151 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.689198 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.689253 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.689267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.689288 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.689323 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:21Z","lastTransitionTime":"2026-01-24T07:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.700427 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.715150 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.730607 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.744651 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc 
kubenswrapper[4705]: I0124 07:42:21.765487 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093
143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.775266 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.792010 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.792055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.792112 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.792153 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.792163 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:21Z","lastTransitionTime":"2026-01-24T07:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.800588 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"40675 6322 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 07:41:59.840681 6322 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 07:41:59.840748 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840775 6322 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840932 6322 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 07:41:59.840998 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841201 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840749 6322 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 07:41:59.841282 6322 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 07:41:59.841331 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841882 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.842430 6322 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc
77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.813758 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218
da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.825376 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d7b012-83a2-4643-9c87-028b3e1a5ff5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344cb802c68b9e25afbc22949d27bd8416ae576ba6d6bd070e9281e0a6975bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d358da385a0af4a5ff14c3b23d2ce504be52d63d9b1808edfc78ed6e12bf9402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba7d064c3ebfe719451fea0f2144622154aac8eb18df6666adb9d5b464db49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.838813 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.852905 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a5e5acaef986257b20ca3ea8d8222ce26d04290d2ba0562be112581ae9f12c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:42:15Z\\\",\\\"message\\\":\\\"2026-01-24T07:41:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_58e09f55-5d8d-401c-8549-e5906469f9a4\\\\n2026-01-24T07:41:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_58e09f55-5d8d-401c-8549-e5906469f9a4 to /host/opt/cni/bin/\\\\n2026-01-24T07:41:30Z [verbose] multus-daemon started\\\\n2026-01-24T07:41:30Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T07:42:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.862958 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:30:22.049760599 +0000 UTC Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.866142 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18f140b73d9e089cff43755e810217
f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.877597 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:21Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.894272 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.894333 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.894348 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.894365 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.894398 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:21Z","lastTransitionTime":"2026-01-24T07:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.996736 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.997136 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.997146 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.997160 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:21 crc kubenswrapper[4705]: I0124 07:42:21.997170 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:21Z","lastTransitionTime":"2026-01-24T07:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.100285 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.100328 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.100339 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.100356 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.100367 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:22Z","lastTransitionTime":"2026-01-24T07:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.203102 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.203134 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.203144 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.203157 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.203165 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:22Z","lastTransitionTime":"2026-01-24T07:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.305502 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.305533 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.305541 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.305554 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.305563 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:22Z","lastTransitionTime":"2026-01-24T07:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.407886 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.407939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.407954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.407971 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.407982 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:22Z","lastTransitionTime":"2026-01-24T07:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.644233 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.644699 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:22 crc kubenswrapper[4705]: E0124 07:42:22.644799 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.644886 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.644916 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:22 crc kubenswrapper[4705]: E0124 07:42:22.644980 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:22 crc kubenswrapper[4705]: E0124 07:42:22.645027 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:22 crc kubenswrapper[4705]: E0124 07:42:22.645309 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.647292 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.647323 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.647331 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.647344 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.647352 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:22Z","lastTransitionTime":"2026-01-24T07:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.749410 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.749444 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.749454 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.749468 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.749478 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:22Z","lastTransitionTime":"2026-01-24T07:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.851974 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.852018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.852027 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.852041 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.852051 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:22Z","lastTransitionTime":"2026-01-24T07:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.863170 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 00:49:09.70455465 +0000 UTC Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.955089 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.955169 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.955194 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.955223 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:22 crc kubenswrapper[4705]: I0124 07:42:22.955244 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:22Z","lastTransitionTime":"2026-01-24T07:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.057757 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.057799 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.057810 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.057838 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.057856 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:23Z","lastTransitionTime":"2026-01-24T07:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.160715 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.160770 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.160782 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.160800 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.160812 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:23Z","lastTransitionTime":"2026-01-24T07:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.263483 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.263525 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.263535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.263550 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.263561 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:23Z","lastTransitionTime":"2026-01-24T07:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.366396 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.366438 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.366447 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.366462 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.366471 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:23Z","lastTransitionTime":"2026-01-24T07:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.468970 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.469000 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.469008 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.469022 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.469030 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:23Z","lastTransitionTime":"2026-01-24T07:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.571665 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.571722 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.571742 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.571864 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.571903 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:23Z","lastTransitionTime":"2026-01-24T07:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.674453 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.674540 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.674573 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.674605 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.674647 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:23Z","lastTransitionTime":"2026-01-24T07:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.776759 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.776849 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.776886 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.776907 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.776919 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:23Z","lastTransitionTime":"2026-01-24T07:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.864180 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:22:13.887033007 +0000 UTC Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.879470 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.879538 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.879563 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.879589 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.879607 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:23Z","lastTransitionTime":"2026-01-24T07:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.981459 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.981520 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.981540 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.981565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:23 crc kubenswrapper[4705]: I0124 07:42:23.981583 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:23Z","lastTransitionTime":"2026-01-24T07:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.084709 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.084772 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.084792 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.084856 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.084882 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:24Z","lastTransitionTime":"2026-01-24T07:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.187219 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.187254 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.187264 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.187280 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.187289 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:24Z","lastTransitionTime":"2026-01-24T07:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.289949 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.289997 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.290007 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.290024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.290034 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:24Z","lastTransitionTime":"2026-01-24T07:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.392659 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.392715 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.392733 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.392764 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.392779 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:24Z","lastTransitionTime":"2026-01-24T07:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.494722 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.494769 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.494786 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.494803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.494813 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:24Z","lastTransitionTime":"2026-01-24T07:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.575425 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.575533 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.575623 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.575704 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:24 crc kubenswrapper[4705]: E0124 07:42:24.575875 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:24 crc kubenswrapper[4705]: E0124 07:42:24.576023 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:24 crc kubenswrapper[4705]: E0124 07:42:24.576098 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:24 crc kubenswrapper[4705]: E0124 07:42:24.576182 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.597754 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.597851 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.597873 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.597901 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.597919 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:24Z","lastTransitionTime":"2026-01-24T07:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.700777 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.700846 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.700862 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.700879 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.700890 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:24Z","lastTransitionTime":"2026-01-24T07:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.803015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.803105 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.803130 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.803158 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.803179 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:24Z","lastTransitionTime":"2026-01-24T07:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.864419 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:06:03.251835062 +0000 UTC Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.905729 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.905772 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.905788 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.905804 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:24 crc kubenswrapper[4705]: I0124 07:42:24.905814 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:24Z","lastTransitionTime":"2026-01-24T07:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.007783 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.007843 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.007861 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.007892 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.007902 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:25Z","lastTransitionTime":"2026-01-24T07:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.110389 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.110427 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.110436 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.110449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.110458 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:25Z","lastTransitionTime":"2026-01-24T07:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.213200 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.213236 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.213249 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.213263 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.213274 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:25Z","lastTransitionTime":"2026-01-24T07:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.315667 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.315710 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.315719 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.315732 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.315742 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:25Z","lastTransitionTime":"2026-01-24T07:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.417508 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.417543 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.417557 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.417575 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.417586 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:25Z","lastTransitionTime":"2026-01-24T07:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.520721 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.520773 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.520791 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.520856 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.520883 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:25Z","lastTransitionTime":"2026-01-24T07:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.623017 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.623047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.623055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.623069 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.623078 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:25Z","lastTransitionTime":"2026-01-24T07:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.725692 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.725739 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.725751 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.725769 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.725780 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:25Z","lastTransitionTime":"2026-01-24T07:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.828028 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.828105 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.828115 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.828130 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.828140 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:25Z","lastTransitionTime":"2026-01-24T07:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.864599 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:07:56.198849508 +0000 UTC Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.931452 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.931489 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.931499 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.931514 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:25 crc kubenswrapper[4705]: I0124 07:42:25.931524 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:25Z","lastTransitionTime":"2026-01-24T07:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.034101 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.034137 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.034146 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.034162 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.034172 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:26Z","lastTransitionTime":"2026-01-24T07:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.136939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.136978 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.137004 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.137019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.137029 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:26Z","lastTransitionTime":"2026-01-24T07:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.239482 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.239570 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.239595 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.239622 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.239640 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:26Z","lastTransitionTime":"2026-01-24T07:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.341465 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.341521 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.341539 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.341560 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.341575 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:26Z","lastTransitionTime":"2026-01-24T07:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.443940 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.443982 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.443993 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.444010 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.444020 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:26Z","lastTransitionTime":"2026-01-24T07:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.546232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.546305 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.546316 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.546335 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.546696 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:26Z","lastTransitionTime":"2026-01-24T07:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.575531 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.575578 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.575683 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.575752 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.575739 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.576138 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.576226 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.576305 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.576762 4705 scope.go:117] "RemoveContainer" containerID="4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.649475 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.649530 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.649545 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.649564 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.649578 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:26Z","lastTransitionTime":"2026-01-24T07:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.694856 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.695087 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:30.695049502 +0000 UTC m=+149.414922950 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.695676 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.695866 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.695937 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.696040 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.696353 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 07:43:30.696221778 +0000 UTC m=+149.416095076 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.696501 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 07:43:30.696476216 +0000 UTC m=+149.416349684 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.752918 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.752966 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.752977 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.752995 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.753013 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:26Z","lastTransitionTime":"2026-01-24T07:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.855766 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.855837 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.855849 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.855865 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.855877 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:26Z","lastTransitionTime":"2026-01-24T07:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.865973 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:26:28.735682134 +0000 UTC Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.897867 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.897961 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.898112 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.898134 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.898147 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 
07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.898195 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 07:43:30.898179214 +0000 UTC m=+149.618052502 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.898377 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.898393 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.898401 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:42:26 crc kubenswrapper[4705]: E0124 07:42:26.898427 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-24 07:43:30.898418871 +0000 UTC m=+149.618292159 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.958359 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.958394 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.958403 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.958417 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:26 crc kubenswrapper[4705]: I0124 07:42:26.958426 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:26Z","lastTransitionTime":"2026-01-24T07:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.060658 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.060749 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.060764 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.060783 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.060795 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:27Z","lastTransitionTime":"2026-01-24T07:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.162374 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.162409 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.162417 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.162432 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.162441 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:27Z","lastTransitionTime":"2026-01-24T07:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.265263 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.265311 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.265324 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.265345 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.265473 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:27Z","lastTransitionTime":"2026-01-24T07:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.368407 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.368646 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.368656 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.368672 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.368683 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:27Z","lastTransitionTime":"2026-01-24T07:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.471515 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.471561 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.471573 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.471588 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.471598 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:27Z","lastTransitionTime":"2026-01-24T07:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.574576 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.574631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.574644 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.574660 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.574671 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:27Z","lastTransitionTime":"2026-01-24T07:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.677202 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.677244 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.677261 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.677280 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.677293 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:27Z","lastTransitionTime":"2026-01-24T07:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.780178 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.780250 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.780273 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.780304 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.780326 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:27Z","lastTransitionTime":"2026-01-24T07:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.866918 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 22:23:38.210014154 +0000 UTC Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.882592 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.882631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.882645 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.882663 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.882674 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:27Z","lastTransitionTime":"2026-01-24T07:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.985581 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.985632 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.985644 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.985661 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:27 crc kubenswrapper[4705]: I0124 07:42:27.985671 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:27Z","lastTransitionTime":"2026-01-24T07:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.088832 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.089157 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.089168 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.089185 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.089196 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:28Z","lastTransitionTime":"2026-01-24T07:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.388910 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.388944 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.388953 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.388969 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.388979 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:28Z","lastTransitionTime":"2026-01-24T07:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.491467 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.491515 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.491526 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.491567 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.491579 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:28Z","lastTransitionTime":"2026-01-24T07:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.574738 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.574769 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.574759 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.574747 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:28 crc kubenswrapper[4705]: E0124 07:42:28.574975 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:28 crc kubenswrapper[4705]: E0124 07:42:28.575166 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:28 crc kubenswrapper[4705]: E0124 07:42:28.575251 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:28 crc kubenswrapper[4705]: E0124 07:42:28.575320 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.593551 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.593599 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.593610 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.593631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.593642 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:28Z","lastTransitionTime":"2026-01-24T07:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.724334 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.724388 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.724400 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.724416 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.724428 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:28Z","lastTransitionTime":"2026-01-24T07:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.727109 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovnkube-controller/2.log" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.729965 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerStarted","Data":"089fb2079fb32f3d46cf6a91bfc27a6fc07d67169f8b0dec889ef15a0433f124"} Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.730389 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.740415 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1
c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.766070 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://089fb2079fb32f3d46cf6a91bfc27a6fc07d67169f8b0dec889ef15a0433f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"40675 6322 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 07:41:59.840681 6322 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 07:41:59.840748 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840775 6322 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840932 6322 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 07:41:59.840998 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841201 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840749 6322 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 07:41:59.841282 6322 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 07:41:59.841331 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841882 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.842430 6322 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.779412 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218
da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.798006 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.811648 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.826571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.826610 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.826622 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:28 crc 
kubenswrapper[4705]: I0124 07:42:28.826639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.826651 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:28Z","lastTransitionTime":"2026-01-24T07:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.827378 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a5e5acaef986257b20ca3ea8d8222ce26d04290d2ba0562be112581ae9f12c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:42:15Z\\\",\\\"message\\\":\\\"2026-01-24T07:41:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_58e09f55-5d8d-401c-8549-e5906469f9a4\\\\n2026-01-24T07:41:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_58e09f55-5d8d-401c-8549-e5906469f9a4 to /host/opt/cni/bin/\\\\n2026-01-24T07:41:30Z [verbose] multus-daemon started\\\\n2026-01-24T07:41:30Z [verbose] Readiness Indicator file check\\\\n2026-01-24T07:42:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.842521 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18f140b73d9e089cff43755e810217f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:28Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.867239 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:23:59.15531497 +0000 UTC Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.931135 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.931168 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.931177 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.931192 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:28 crc kubenswrapper[4705]: I0124 07:42:28.931210 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:28Z","lastTransitionTime":"2026-01-24T07:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.035917 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.035973 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.035992 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.036015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.036032 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:29Z","lastTransitionTime":"2026-01-24T07:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.094946 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.109957 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d7b012-83a2-4643-9c87-028b3e1a5ff5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344cb802c68b9e25afbc22949d27bd8416ae576ba6d6bd070e9281e0a6975bba\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d358da385a0af4a5ff14c3b23d2ce504be52d63d9b1808edfc78ed6e12bf9402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba7d064c3ebfe719451fea0f2144622154aac8eb18df6666adb9d5b464db49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.125651 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.139015 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.156052 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.166580 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e9e3fb-4cb5-4a4e-9a26-b5f4964ccfbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bac307186907ea4fde1cbf870e0ab89488fae41b12cad0fa69e874d5272f1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:29Z is 
after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.169022 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.169059 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.169068 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.169082 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.169092 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:29Z","lastTransitionTime":"2026-01-24T07:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.179274 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.191975 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.203660 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.214794 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.223790 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc 
kubenswrapper[4705]: I0124 07:42:29.244210 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.272047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.272094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.272107 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.272124 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.272136 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:29Z","lastTransitionTime":"2026-01-24T07:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.374194 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.374239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.374249 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.374267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.374278 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:29Z","lastTransitionTime":"2026-01-24T07:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.476746 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.476805 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.476885 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.476919 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.476942 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:29Z","lastTransitionTime":"2026-01-24T07:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.579224 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.579275 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.579285 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.579302 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.579310 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:29Z","lastTransitionTime":"2026-01-24T07:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.668646 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.668724 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.668739 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.668760 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.668774 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:29Z","lastTransitionTime":"2026-01-24T07:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:29 crc kubenswrapper[4705]: E0124 07:42:29.681385 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.684787 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.684832 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.684841 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.684853 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.684862 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:29Z","lastTransitionTime":"2026-01-24T07:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:29 crc kubenswrapper[4705]: E0124 07:42:29.696240 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.699802 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.699847 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.699856 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.699869 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.699878 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:29Z","lastTransitionTime":"2026-01-24T07:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:29 crc kubenswrapper[4705]: E0124 07:42:29.713028 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.747028 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.747073 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.747084 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.747105 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.747114 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:29Z","lastTransitionTime":"2026-01-24T07:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:29 crc kubenswrapper[4705]: E0124 07:42:29.761587 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.766161 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.766204 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.766213 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.766228 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.766238 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:29Z","lastTransitionTime":"2026-01-24T07:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:29 crc kubenswrapper[4705]: E0124 07:42:29.776745 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:29 crc kubenswrapper[4705]: E0124 07:42:29.777012 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.778317 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.778344 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.778353 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.778366 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.778374 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:29Z","lastTransitionTime":"2026-01-24T07:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.867712 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 01:30:32.541831958 +0000 UTC Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.880542 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.880592 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.880607 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.880627 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.880642 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:29Z","lastTransitionTime":"2026-01-24T07:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.983049 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.983092 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.983104 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.983119 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:29 crc kubenswrapper[4705]: I0124 07:42:29.983129 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:29Z","lastTransitionTime":"2026-01-24T07:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.084602 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.084647 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.084660 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.084676 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.084700 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:30Z","lastTransitionTime":"2026-01-24T07:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.187392 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.187430 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.187439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.187451 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.187460 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:30Z","lastTransitionTime":"2026-01-24T07:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.289942 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.289981 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.289992 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.290010 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.290021 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:30Z","lastTransitionTime":"2026-01-24T07:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.392000 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.392046 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.392060 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.392078 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.392090 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:30Z","lastTransitionTime":"2026-01-24T07:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.494815 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.494861 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.494868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.494882 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.494891 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:30Z","lastTransitionTime":"2026-01-24T07:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.575165 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:30 crc kubenswrapper[4705]: E0124 07:42:30.575339 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.575422 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:30 crc kubenswrapper[4705]: E0124 07:42:30.575491 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.575546 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:30 crc kubenswrapper[4705]: E0124 07:42:30.575605 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.575664 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:30 crc kubenswrapper[4705]: E0124 07:42:30.575722 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.597504 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.597578 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.597615 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.597646 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.597667 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:30Z","lastTransitionTime":"2026-01-24T07:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.700033 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.700067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.700075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.700088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.700097 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:30Z","lastTransitionTime":"2026-01-24T07:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.753461 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovnkube-controller/3.log" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.754080 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovnkube-controller/2.log" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.756942 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerID="089fb2079fb32f3d46cf6a91bfc27a6fc07d67169f8b0dec889ef15a0433f124" exitCode=1 Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.756986 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerDied","Data":"089fb2079fb32f3d46cf6a91bfc27a6fc07d67169f8b0dec889ef15a0433f124"} Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.757025 4705 scope.go:117] "RemoveContainer" containerID="4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.757906 4705 scope.go:117] "RemoveContainer" containerID="089fb2079fb32f3d46cf6a91bfc27a6fc07d67169f8b0dec889ef15a0433f124" Jan 24 07:42:30 crc kubenswrapper[4705]: E0124 07:42:30.758089 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.771689 4705 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.785626 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.796095 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e9e3fb-4cb5-4a4e-9a26-b5f4964ccfbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bac307186907ea4fde1cbf870e0ab89488fae41b12cad0fa69e874d5272f1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d79342
6f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.801868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.801905 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.801915 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.801929 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.801938 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:30Z","lastTransitionTime":"2026-01-24T07:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.809945 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26
c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.823219 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.836185 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.851246 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.864643 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc 
kubenswrapper[4705]: I0124 07:42:30.868060 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:48:40.543324365 +0000 UTC Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.885698 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5
fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.898649 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.903519 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.903548 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.903557 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.903570 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.903579 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:30Z","lastTransitionTime":"2026-01-24T07:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.916699 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://089fb2079fb32f3d46cf6a91bfc27a6fc07d67169f8b0dec889ef15a0433f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"40675 6322 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 07:41:59.840681 6322 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 07:41:59.840748 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840775 6322 reflector.go:311] 
Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840932 6322 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 07:41:59.840998 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841201 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840749 6322 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 07:41:59.841282 6322 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 07:41:59.841331 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841882 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.842430 6322 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089fb2079fb32f3d46cf6a91bfc27a6fc07d67169f8b0dec889ef15a0433f124\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:42:30Z\\\",\\\"message\\\":\\\"/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 
07:42:29.756633 6728 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-source-55646444c4-trplf): added port \\\\u0026{name:openshift-network-diagnostics_network-check-source-55646444c4-trplf uuid:960d98b2-dc64-4e93-a4b6-9b19847af71e logicalSwitch:crc ips:[0xc00717e480] mac:[10 88 10 217 0 59] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.59/23] and MAC: 0a:58:0a:d9:00:3b\\\\nI0124 07:42:29.756643 6728 pods.go:252] [openshift-network-diagnostics/network-check-source-55646444c4-trplf] addLogicalPort took 5.390897ms, libovsdb time 570.067µs\\\\nI0124 07:42:29.756650 6728 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf after 0 failed attempt(s)\\\\nI0124 07:42:29.756654 6728 default_network_controller.go:776] Recording success event on pod openshift-network-diagnostics/network-check-source\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.928086 4705 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.941734 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.952961 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.966634 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a5e5acaef986257b20ca3ea8d8222ce26d04290d2ba0562be112581ae9f12c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:42:15Z\\\",\\\"message\\\":\\\"2026-01-24T07:41:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_58e09f55-5d8d-401c-8549-e5906469f9a4\\\\n2026-01-24T07:41:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_58e09f55-5d8d-401c-8549-e5906469f9a4 to /host/opt/cni/bin/\\\\n2026-01-24T07:41:30Z [verbose] multus-daemon started\\\\n2026-01-24T07:41:30Z [verbose] Readiness Indicator file check\\\\n2026-01-24T07:42:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.977934 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18f140b73d9e089cff43755e810217
f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.987607 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:30 crc kubenswrapper[4705]: I0124 07:42:30.998391 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d7b012-83a2-4643-9c87-028b3e1a5ff5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344cb802c68b9e25afbc22949d27bd8416ae576ba6d6bd070e9281e0a6975bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d358da385a0af4a5ff14c3b23d2ce504be52d63d9b1808edfc78ed6e12bf9402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba7d064c3ebfe719451fea0f2144622154aac8eb18df6666adb9d5b464db49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.005086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.005115 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.005126 4705 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.005140 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.005149 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:31Z","lastTransitionTime":"2026-01-24T07:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.011400 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.107591 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.107631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.107648 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 
07:42:31.107664 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.107674 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:31Z","lastTransitionTime":"2026-01-24T07:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.211015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.211053 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.211064 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.211086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.211099 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:31Z","lastTransitionTime":"2026-01-24T07:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.313500 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.313533 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.313542 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.313555 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.313564 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:31Z","lastTransitionTime":"2026-01-24T07:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.416229 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.416275 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.416285 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.416300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.416309 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:31Z","lastTransitionTime":"2026-01-24T07:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.518607 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.518673 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.518726 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.518755 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.518776 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:31Z","lastTransitionTime":"2026-01-24T07:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.599042 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://089fb2079fb32f3d46cf6a91bfc27a6fc07d67169f8b0dec889ef15a0433f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"40675 6322 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 07:41:59.840681 6322 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 07:41:59.840748 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840775 6322 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840932 6322 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 07:41:59.840998 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841201 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840749 6322 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 07:41:59.841282 6322 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 07:41:59.841331 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841882 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.842430 6322 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089fb2079fb32f3d46cf6a91bfc27a6fc07d67169f8b0dec889ef15a0433f124\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:42:30Z\\\",\\\"message\\\":\\\"/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 
07:42:29.756633 6728 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-source-55646444c4-trplf): added port \\\\u0026{name:openshift-network-diagnostics_network-check-source-55646444c4-trplf uuid:960d98b2-dc64-4e93-a4b6-9b19847af71e logicalSwitch:crc ips:[0xc00717e480] mac:[10 88 10 217 0 59] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.59/23] and MAC: 0a:58:0a:d9:00:3b\\\\nI0124 07:42:29.756643 6728 pods.go:252] [openshift-network-diagnostics/network-check-source-55646444c4-trplf] addLogicalPort took 5.390897ms, libovsdb time 570.067µs\\\\nI0124 07:42:29.756650 6728 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf after 0 failed attempt(s)\\\\nI0124 07:42:29.756654 6728 default_network_controller.go:776] Recording success event on pod openshift-network-diagnostics/network-check-source\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.612923 4705 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.621690 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.621733 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.621745 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.621763 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.621776 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:31Z","lastTransitionTime":"2026-01-24T07:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.625423 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.636561 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.650989 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a5e5acaef986257b20ca3ea8d8222ce26d04290d2ba0562be112581ae9f12c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:42:15Z\\\",\\\"message\\\":\\\"2026-01-24T07:41:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_58e09f55-5d8d-401c-8549-e5906469f9a4\\\\n2026-01-24T07:41:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_58e09f55-5d8d-401c-8549-e5906469f9a4 to /host/opt/cni/bin/\\\\n2026-01-24T07:41:30Z [verbose] multus-daemon started\\\\n2026-01-24T07:41:30Z [verbose] Readiness Indicator file check\\\\n2026-01-24T07:42:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.661617 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18f140b73d9e089cff43755e810217
f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.671186 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.684793 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d7b012-83a2-4643-9c87-028b3e1a5ff5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344cb802c68b9e25afbc22949d27bd8416ae576ba6d6bd070e9281e0a6975bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d358da385a0af4a5ff14c3b23d2ce504be52d63d9b1808edfc78ed6e12bf9402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba7d064c3ebfe719451fea0f2144622154aac8eb18df6666adb9d5b464db49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.702348 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.714636 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.723341 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.723368 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.723376 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.723388 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.723396 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:31Z","lastTransitionTime":"2026-01-24T07:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.729193 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.742998 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e9e3fb-4cb5-4a4e-9a26-b5f4964ccfbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bac307186907ea4fde1cbf870e0ab89488fae41b12cad0fa69e874d5272f1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e
18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc 
kubenswrapper[4705]: I0124 07:42:31.754487 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.761347 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovnkube-controller/3.log" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.768112 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.779743 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.792258 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.803892 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc 
kubenswrapper[4705]: I0124 07:42:31.823418 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.825317 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.825349 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.825359 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.825374 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.825387 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:31Z","lastTransitionTime":"2026-01-24T07:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.836664 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.868601 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 15:47:48.979156294 +0000 UTC Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.927926 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.927974 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.927985 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.928000 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 
07:42:31 crc kubenswrapper[4705]: I0124 07:42:31.928009 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:31Z","lastTransitionTime":"2026-01-24T07:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.030679 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.030733 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.030748 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.030769 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.030784 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:32Z","lastTransitionTime":"2026-01-24T07:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.133187 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.133222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.133233 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.133249 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.133262 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:32Z","lastTransitionTime":"2026-01-24T07:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.236005 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.236055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.236067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.236084 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.236095 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:32Z","lastTransitionTime":"2026-01-24T07:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.338173 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.338203 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.338212 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.338227 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.338236 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:32Z","lastTransitionTime":"2026-01-24T07:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.440889 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.440949 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.440959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.440972 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.440981 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:32Z","lastTransitionTime":"2026-01-24T07:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.543370 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.543428 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.543447 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.543473 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.543491 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:32Z","lastTransitionTime":"2026-01-24T07:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.574923 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.575009 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:32 crc kubenswrapper[4705]: E0124 07:42:32.575055 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.574949 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.575098 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:32 crc kubenswrapper[4705]: E0124 07:42:32.575143 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:32 crc kubenswrapper[4705]: E0124 07:42:32.575221 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:32 crc kubenswrapper[4705]: E0124 07:42:32.575307 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.645888 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.645960 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.645985 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.646016 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.646040 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:32Z","lastTransitionTime":"2026-01-24T07:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.751863 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.751901 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.751913 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.751930 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.751941 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:32Z","lastTransitionTime":"2026-01-24T07:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.855167 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.855231 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.855247 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.855267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.855282 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:32Z","lastTransitionTime":"2026-01-24T07:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.869569 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:54:53.070941317 +0000 UTC Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.958306 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.958355 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.958367 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.958384 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:32 crc kubenswrapper[4705]: I0124 07:42:32.958397 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:32Z","lastTransitionTime":"2026-01-24T07:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.061271 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.061342 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.061375 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.061396 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.061410 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:33Z","lastTransitionTime":"2026-01-24T07:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.164111 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.164148 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.164159 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.164176 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.164188 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:33Z","lastTransitionTime":"2026-01-24T07:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.266491 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.266535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.266544 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.266556 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.266565 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:33Z","lastTransitionTime":"2026-01-24T07:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.368604 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.368644 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.368657 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.368673 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.368682 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:33Z","lastTransitionTime":"2026-01-24T07:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.471140 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.471171 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.471180 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.471196 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.471206 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:33Z","lastTransitionTime":"2026-01-24T07:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.573108 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.573153 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.573167 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.573184 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.573195 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:33Z","lastTransitionTime":"2026-01-24T07:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.675039 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.675083 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.675095 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.675117 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.675129 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:33Z","lastTransitionTime":"2026-01-24T07:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.777776 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.777808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.777831 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.777843 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.777854 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:33Z","lastTransitionTime":"2026-01-24T07:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.870410 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 17:32:16.813220521 +0000 UTC Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.880196 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.880237 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.880249 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.880266 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.880279 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:33Z","lastTransitionTime":"2026-01-24T07:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.982411 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.982464 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.982474 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.982490 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:33 crc kubenswrapper[4705]: I0124 07:42:33.982500 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:33Z","lastTransitionTime":"2026-01-24T07:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.084962 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.085003 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.085013 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.085028 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.085038 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:34Z","lastTransitionTime":"2026-01-24T07:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.188163 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.188200 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.188208 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.188222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.188231 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:34Z","lastTransitionTime":"2026-01-24T07:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.289880 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.289921 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.289930 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.289946 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.289956 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:34Z","lastTransitionTime":"2026-01-24T07:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.392010 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.392070 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.392083 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.392097 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.392107 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:34Z","lastTransitionTime":"2026-01-24T07:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.494215 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.494256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.494269 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.494285 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.494295 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:34Z","lastTransitionTime":"2026-01-24T07:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.575257 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.575321 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.575321 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.575287 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:34 crc kubenswrapper[4705]: E0124 07:42:34.575440 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:34 crc kubenswrapper[4705]: E0124 07:42:34.575494 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:34 crc kubenswrapper[4705]: E0124 07:42:34.575575 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:34 crc kubenswrapper[4705]: E0124 07:42:34.575686 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.596852 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.596889 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.596897 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.596914 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.596923 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:34Z","lastTransitionTime":"2026-01-24T07:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.698715 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.698780 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.698797 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.698811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.698833 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:34Z","lastTransitionTime":"2026-01-24T07:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.800740 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.800787 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.800796 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.800840 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.800859 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:34Z","lastTransitionTime":"2026-01-24T07:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.871361 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:14:37.779206912 +0000 UTC Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.903105 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.903152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.903163 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.903187 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:34 crc kubenswrapper[4705]: I0124 07:42:34.903199 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:34Z","lastTransitionTime":"2026-01-24T07:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.006053 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.006115 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.006133 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.006156 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.006172 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:35Z","lastTransitionTime":"2026-01-24T07:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.108362 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.108402 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.108411 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.108424 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.108432 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:35Z","lastTransitionTime":"2026-01-24T07:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.211405 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.211456 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.211467 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.211483 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.211495 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:35Z","lastTransitionTime":"2026-01-24T07:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.314435 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.314497 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.314512 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.314533 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.314549 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:35Z","lastTransitionTime":"2026-01-24T07:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.417099 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.417172 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.417208 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.417241 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.417263 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:35Z","lastTransitionTime":"2026-01-24T07:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.519903 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.519966 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.519988 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.520013 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.520029 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:35Z","lastTransitionTime":"2026-01-24T07:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.623130 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.623182 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.623198 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.623219 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.623236 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:35Z","lastTransitionTime":"2026-01-24T07:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.725631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.725687 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.725698 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.725716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.725729 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:35Z","lastTransitionTime":"2026-01-24T07:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.827745 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.827799 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.827811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.827852 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.827866 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:35Z","lastTransitionTime":"2026-01-24T07:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.872470 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:57:10.179082021 +0000 UTC Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.930207 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.930254 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.930263 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.930277 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:35 crc kubenswrapper[4705]: I0124 07:42:35.930285 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:35Z","lastTransitionTime":"2026-01-24T07:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.032759 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.032838 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.032861 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.032893 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.032912 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:36Z","lastTransitionTime":"2026-01-24T07:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.135711 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.135752 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.135763 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.135778 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.135788 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:36Z","lastTransitionTime":"2026-01-24T07:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.238267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.238308 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.238329 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.238347 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.238360 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:36Z","lastTransitionTime":"2026-01-24T07:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.341372 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.341421 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.341436 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.341453 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.341464 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:36Z","lastTransitionTime":"2026-01-24T07:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.445322 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.445387 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.445408 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.445436 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.445456 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:36Z","lastTransitionTime":"2026-01-24T07:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.548303 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.548354 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.548367 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.548385 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.548397 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:36Z","lastTransitionTime":"2026-01-24T07:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.575079 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.575213 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.575271 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:36 crc kubenswrapper[4705]: E0124 07:42:36.575233 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:36 crc kubenswrapper[4705]: E0124 07:42:36.575319 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:36 crc kubenswrapper[4705]: E0124 07:42:36.575427 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.575674 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:36 crc kubenswrapper[4705]: E0124 07:42:36.575777 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.651530 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.651563 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.651574 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.651618 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.651635 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:36Z","lastTransitionTime":"2026-01-24T07:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.754521 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.754572 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.754584 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.754602 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.754615 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:36Z","lastTransitionTime":"2026-01-24T07:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.857311 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.857351 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.857361 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.857374 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.857386 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:36Z","lastTransitionTime":"2026-01-24T07:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.872659 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 05:48:29.19335135 +0000 UTC Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.960027 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.960078 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.960088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.960110 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:36 crc kubenswrapper[4705]: I0124 07:42:36.960514 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:36Z","lastTransitionTime":"2026-01-24T07:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.063389 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.063434 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.063447 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.063465 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.063476 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:37Z","lastTransitionTime":"2026-01-24T07:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.165895 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.165980 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.166002 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.166034 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.166057 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:37Z","lastTransitionTime":"2026-01-24T07:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.268969 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.269045 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.269060 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.269084 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.269095 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:37Z","lastTransitionTime":"2026-01-24T07:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.371397 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.371456 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.371468 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.371497 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.371523 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:37Z","lastTransitionTime":"2026-01-24T07:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.473392 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.473431 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.473441 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.473458 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.473468 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:37Z","lastTransitionTime":"2026-01-24T07:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.575887 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.575949 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.575963 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.576000 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.576015 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:37Z","lastTransitionTime":"2026-01-24T07:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.678758 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.678800 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.678814 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.678854 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.678868 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:37Z","lastTransitionTime":"2026-01-24T07:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.781848 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.781908 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.781920 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.781940 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.781953 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:37Z","lastTransitionTime":"2026-01-24T07:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.872846 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 09:28:19.394166545 +0000 UTC Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.885338 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.885377 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.885389 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.885409 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.885425 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:37Z","lastTransitionTime":"2026-01-24T07:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.987984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.988395 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.988421 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.988453 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:37 crc kubenswrapper[4705]: I0124 07:42:37.988476 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:37Z","lastTransitionTime":"2026-01-24T07:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.090965 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.091003 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.091011 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.091026 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.091034 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:38Z","lastTransitionTime":"2026-01-24T07:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.193639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.193691 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.193712 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.193733 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.193751 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:38Z","lastTransitionTime":"2026-01-24T07:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.295856 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.295913 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.295927 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.295945 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.295956 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:38Z","lastTransitionTime":"2026-01-24T07:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.398758 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.398799 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.398815 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.398849 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.398860 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:38Z","lastTransitionTime":"2026-01-24T07:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.502327 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.502377 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.502391 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.502411 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.502425 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:38Z","lastTransitionTime":"2026-01-24T07:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.575326 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:38 crc kubenswrapper[4705]: E0124 07:42:38.575494 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.576188 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.576259 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:38 crc kubenswrapper[4705]: E0124 07:42:38.576390 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.576704 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:38 crc kubenswrapper[4705]: E0124 07:42:38.576868 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:38 crc kubenswrapper[4705]: E0124 07:42:38.577125 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.604464 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.604508 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.604520 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.604538 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.604551 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:38Z","lastTransitionTime":"2026-01-24T07:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.707048 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.707086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.707094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.707108 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.707117 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:38Z","lastTransitionTime":"2026-01-24T07:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.808610 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.808642 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.808652 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.808665 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.808673 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:38Z","lastTransitionTime":"2026-01-24T07:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.873955 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 02:30:24.465747942 +0000 UTC Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.912096 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.912136 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.912146 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.912162 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:38 crc kubenswrapper[4705]: I0124 07:42:38.912172 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:38Z","lastTransitionTime":"2026-01-24T07:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.014978 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.015023 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.015033 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.015046 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.015057 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:39Z","lastTransitionTime":"2026-01-24T07:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.117924 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.117963 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.117974 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.117990 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.118001 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:39Z","lastTransitionTime":"2026-01-24T07:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.220909 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.220943 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.220999 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.221024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.221037 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:39Z","lastTransitionTime":"2026-01-24T07:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.323980 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.324040 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.324055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.324076 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.324090 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:39Z","lastTransitionTime":"2026-01-24T07:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.426413 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.426485 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.426517 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.426547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.426572 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:39Z","lastTransitionTime":"2026-01-24T07:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.528904 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.528964 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.528982 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.529010 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.529032 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:39Z","lastTransitionTime":"2026-01-24T07:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.631048 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.631113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.631123 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.631136 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.631145 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:39Z","lastTransitionTime":"2026-01-24T07:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.733401 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.733459 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.733469 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.733482 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.733489 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:39Z","lastTransitionTime":"2026-01-24T07:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.835196 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.835234 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.835247 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.835267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.835280 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:39Z","lastTransitionTime":"2026-01-24T07:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.874401 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 17:43:20.172667791 +0000 UTC Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.937666 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.937777 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.937853 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.937873 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:39 crc kubenswrapper[4705]: I0124 07:42:39.937887 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:39Z","lastTransitionTime":"2026-01-24T07:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.041068 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.041147 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.041173 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.041203 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.041225 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:40Z","lastTransitionTime":"2026-01-24T07:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.120221 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.120256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.120265 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.120278 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.120289 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:40Z","lastTransitionTime":"2026-01-24T07:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:40 crc kubenswrapper[4705]: E0124 07:42:40.135165 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:40Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.139170 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.139212 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.139223 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.139240 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.139253 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:40Z","lastTransitionTime":"2026-01-24T07:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:40 crc kubenswrapper[4705]: E0124 07:42:40.152544 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:40Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.156673 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.156713 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.156729 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.156770 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.156800 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:40Z","lastTransitionTime":"2026-01-24T07:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:40 crc kubenswrapper[4705]: E0124 07:42:40.169524 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:40Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.173201 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.173275 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.173292 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.173311 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.173322 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:40Z","lastTransitionTime":"2026-01-24T07:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:40 crc kubenswrapper[4705]: E0124 07:42:40.187205 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:40Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.190759 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.190799 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.190808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.190835 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.190845 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:40Z","lastTransitionTime":"2026-01-24T07:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:40 crc kubenswrapper[4705]: E0124 07:42:40.204560 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c57bc973-aee3-462e-9560-e18c43dd1277\\\",\\\"systemUUID\\\":\\\"8dcf50f1-4fbc-440d-b092-936c9603c61c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:40Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:40 crc kubenswrapper[4705]: E0124 07:42:40.204697 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.206182 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.206214 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.206224 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.206241 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.206253 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:40Z","lastTransitionTime":"2026-01-24T07:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.308924 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.308957 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.308966 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.308978 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.308986 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:40Z","lastTransitionTime":"2026-01-24T07:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.411907 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.411953 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.411968 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.411988 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.412002 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:40Z","lastTransitionTime":"2026-01-24T07:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.514496 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.514534 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.514546 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.514562 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.514573 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:40Z","lastTransitionTime":"2026-01-24T07:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.597919 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:40 crc kubenswrapper[4705]: E0124 07:42:40.598041 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.598091 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:40 crc kubenswrapper[4705]: E0124 07:42:40.598240 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.598380 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:40 crc kubenswrapper[4705]: E0124 07:42:40.598544 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.598611 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:40 crc kubenswrapper[4705]: E0124 07:42:40.598699 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.616998 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.617051 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.617067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.617090 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.617105 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:40Z","lastTransitionTime":"2026-01-24T07:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.719737 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.719770 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.719783 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.719800 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.719812 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:40Z","lastTransitionTime":"2026-01-24T07:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.822737 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.822793 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.822813 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.822897 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.822920 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:40Z","lastTransitionTime":"2026-01-24T07:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.875233 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:17:54.090121559 +0000 UTC Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.925434 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.925669 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.925693 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.925718 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:40 crc kubenswrapper[4705]: I0124 07:42:40.925736 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:40Z","lastTransitionTime":"2026-01-24T07:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.029109 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.029184 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.029203 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.029230 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.029249 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:41Z","lastTransitionTime":"2026-01-24T07:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.132432 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.132468 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.132480 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.132497 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.132509 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:41Z","lastTransitionTime":"2026-01-24T07:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.236341 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.236391 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.236405 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.236424 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.236437 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:41Z","lastTransitionTime":"2026-01-24T07:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.338843 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.338878 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.338890 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.338904 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.338915 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:41Z","lastTransitionTime":"2026-01-24T07:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.441752 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.441790 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.441803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.441818 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.441845 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:41Z","lastTransitionTime":"2026-01-24T07:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.544606 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.544654 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.544666 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.544682 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.544693 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:41Z","lastTransitionTime":"2026-01-24T07:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.588595 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e9e3fb-4cb5-4a4e-9a26-b5f4964ccfbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bac307186907ea4fde1cbf870e0ab89488fae41b12cad0fa69e874d5272f1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.603885 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.620133 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.637202 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbn67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2164d9f8-41b8-4380-b070-76e772189f1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba572bc0fc9d445440e460761791470904379c61693b412babb75d2a88e88f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67a364bcb48102a38aa291940c8d1df2e6f458755d1f0883ae350c79e689ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080f61096a37b7f0b5918ef6759d5461e1f570c1136d84d2e339cac3d9844da3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07895a89581e77a17dcb3ed647e10830ca67e5a706d3dcdf011ab9fa38a1482d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d999be27b118503f25cabae7d71ed8fb32d963690b79b7b0eb6fc5d9477826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be06a02cc59738c3be387e4adadccc02ceaa217374ec852ef4f7c08966e72539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a811eeddba7c8e20a9289ffd308fb1848534bdb65deb2c9a6b4859aac2c805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxmq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbn67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.647251 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.647296 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.647311 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.647327 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.647338 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:41Z","lastTransitionTime":"2026-01-24T07:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.652252 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37a919919cad22f4fa60303cb3e7a398cbdc86b65bd37925aca8e2cc3b8ba80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5db19b38e15d4036b5e8e37af244f4651659359469bc5c9dffee9feb3ab20dca\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.667509 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mxnng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft2qm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mxnng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc 
kubenswrapper[4705]: I0124 07:42:41.692984 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f5a05c-71e2-4ee8-bed2-53e60e58882d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62015fc1c1d66288dff9bb5a6c2e5be5aee933cb43f1504135e429cf49b2a94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://75ae0e8422be23a8925137035db2f303933368fa75bb2619c6034cc97f2ba1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1922e4d625e9520df7c6785e541d770a4273159f49013ea05f3beea3dd0c0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c6444ea83311b000c3617db7c113afa4c422e5c290a36c257bbea8973761aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95521b58197ebe0d933741d229ca7d9956a270b8f25ad68d1f5f16526fdd8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea679b6b2ed192a3514e9760bd4a0edfc94f0aed9d4a7aa20ba230499830cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191d79eaa3d24dff6d3ec1e63f6bea0a16314f915dc79b04a15db02846619bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e5fdf0a6fd575a074c8b2eb2ff3b4456e72b493eee1ec9a1a719329a7da5db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.707635 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.720598 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a367bfbe1a4ad88c764854671a6e336a5ccf8b9f1901663c7092f02edcd032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.734780 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.748503 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T07:41:21Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769240476\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769240476\\\\\\\\\\\\\\\" (2026-01-24 06:41:16 +0000 UTC to 2027-01-24 06:41:16 +0000 UTC (now=2026-01-24 07:41:21.775483256 +0000 UTC))\\\\\\\"\\\\nI0124 07:41:21.775517 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0124 07:41:21.775533 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0124 07:41:21.775550 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775565 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0124 07:41:21.775587 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15276412/tls.crt::/tmp/serving-cert-15276412/tls.key\\\\\\\"\\\\nI0124 07:41:21.775682 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0124 07:41:21.777504 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0124 07:41:21.777524 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0124 07:41:21.777735 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777748 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0124 07:41:21.777763 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0124 07:41:21.777772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF0124 07:41:21.779720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.750033 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.750072 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.750084 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.750102 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.750117 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:41Z","lastTransitionTime":"2026-01-24T07:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.758761 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w9jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31854c6e-066f-4612-88b4-1e156b4770e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814aef632daf2d0644b14076c42317d7a4c1c8733d3a69672909e3bab14f6e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rqtq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w9jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.780789 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://089fb2079fb32f3d46cf6a91bfc27a6fc07d67169f8b0dec889ef15a0433f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea9a3b17ac62182d6395a015595aeb72eca918594599af4ce3247387bae4c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:41:59Z\\\",\\\"message\\\":\\\"40675 6322 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 07:41:59.840681 6322 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 07:41:59.840748 6322 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840775 6322 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840932 6322 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 07:41:59.840998 6322 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841201 6322 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.840749 6322 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 07:41:59.841282 6322 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 07:41:59.841331 6322 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.841882 6322 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 07:41:59.842430 6322 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089fb2079fb32f3d46cf6a91bfc27a6fc07d67169f8b0dec889ef15a0433f124\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:42:30Z\\\",\\\"message\\\":\\\"/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 
07:42:29.756633 6728 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-source-55646444c4-trplf): added port \\\\u0026{name:openshift-network-diagnostics_network-check-source-55646444c4-trplf uuid:960d98b2-dc64-4e93-a4b6-9b19847af71e logicalSwitch:crc ips:[0xc00717e480] mac:[10 88 10 217 0 59] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.59/23] and MAC: 0a:58:0a:d9:00:3b\\\\nI0124 07:42:29.756643 6728 pods.go:252] [openshift-network-diagnostics/network-check-source-55646444c4-trplf] addLogicalPort took 5.390897ms, libovsdb time 570.067µs\\\\nI0124 07:42:29.756650 6728 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf after 0 failed attempt(s)\\\\nI0124 07:42:29.756654 6728 default_network_controller.go:776] Recording success event on pod openshift-network-diagnostics/network-check-source\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcjx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-js42b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.792567 4705 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2d9ba7a-8541-4f85-b5e1-ae79e882761c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1328eb5d4a252f8ab26edd1c6271a6c28ab29f298078a15252c159646cc8a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa710434f940f887ec6ef15c0536255137218da6d7682ad5a22ec8953dabb19e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsmpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qsww7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.802783 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kzgk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a5e20c-e474-4205-bb02-883ca9bb71f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512065ce1d95969396e455dcc8c754579e73ac84ab66ea6f0821cc9a12f3a1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc2zb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kzgk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.814048 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d7b012-83a2-4643-9c87-028b3e1a5ff5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344cb802c68b9e25afbc22949d27bd8416ae576ba6d6bd070e9281e0a6975bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d358da385a0af4a5ff14c3b23d2ce504be52d63d9b1808edfc78ed6e12bf9402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba7d064c3ebfe719451fea0f2144622154aac8eb18df6666adb9d5b464db49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d40bd8f87cae0f109929ef54d2b4986b96b786dc3e47f0118b96f69f6df5d9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.825507 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.838585 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9wbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a5e5acaef986257b20ca3ea8d8222ce26d04290d2ba0562be112581ae9f12c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T07:42:15Z\\\",\\\"message\\\":\\\"2026-01-24T07:41:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_58e09f55-5d8d-401c-8549-e5906469f9a4\\\\n2026-01-24T07:41:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_58e09f55-5d8d-401c-8549-e5906469f9a4 to /host/opt/cni/bin/\\\\n2026-01-24T07:41:30Z [verbose] multus-daemon started\\\\n2026-01-24T07:41:30Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T07:42:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mqzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9wbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.852333 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7b3b969-5164-4f10-8758-72b7e2f4b762\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a11239207d5a760a6ac1d66268a9aa52fd29cc6bfee0a8c2408494e6746d3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://431cb8075f18f140b73d9e089cff43755e810217
f0785819ba908696396884a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6jvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dxqp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:41Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.852547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.852599 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.852615 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:41 crc 
kubenswrapper[4705]: I0124 07:42:41.852638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.852655 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:41Z","lastTransitionTime":"2026-01-24T07:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.876135 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 12:44:36.057839923 +0000 UTC Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.955771 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.955872 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.955890 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.955916 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:41 crc kubenswrapper[4705]: I0124 07:42:41.955933 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:41Z","lastTransitionTime":"2026-01-24T07:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.058545 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.058591 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.058604 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.058620 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.058630 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:42Z","lastTransitionTime":"2026-01-24T07:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.161497 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.161528 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.161542 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.161557 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.161566 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:42Z","lastTransitionTime":"2026-01-24T07:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.263970 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.264030 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.264042 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.264061 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.264073 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:42Z","lastTransitionTime":"2026-01-24T07:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.366142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.366178 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.366191 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.366207 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.366221 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:42Z","lastTransitionTime":"2026-01-24T07:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.468656 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.468719 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.468738 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.468767 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.468783 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:42Z","lastTransitionTime":"2026-01-24T07:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.570933 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.570967 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.570976 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.570992 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.571002 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:42Z","lastTransitionTime":"2026-01-24T07:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.575423 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.575648 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.575657 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.575760 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:42 crc kubenswrapper[4705]: E0124 07:42:42.575929 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:42 crc kubenswrapper[4705]: E0124 07:42:42.576062 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:42 crc kubenswrapper[4705]: E0124 07:42:42.576261 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:42 crc kubenswrapper[4705]: E0124 07:42:42.576561 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.576732 4705 scope.go:117] "RemoveContainer" containerID="089fb2079fb32f3d46cf6a91bfc27a6fc07d67169f8b0dec889ef15a0433f124" Jan 24 07:42:42 crc kubenswrapper[4705]: E0124 07:42:42.576873 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.587114 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e9e3fb-4cb5-4a4e-9a26-b5f4964ccfbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bac307186907ea4fde1cbf870e0ab89488fae41b12cad0fa69e874d5272f1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fddcd9892b584de3a8f4c36da29ef09ae73e5f82e0603c25d8b362b523798383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T07:41:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.601812 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2403abfc-8829-43c9-8c02-304a9ab34a4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4469d7c7c5c51f51fc899801fd35ea756402ba646d41cd5700973e69e33f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b9a0eb3bf2cd5e2ac48df9edac6b457ec1abb0af06b698c0db201ab0d400fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50cef696ae0924a6812bef782f8bf767ac37b61e7e91a78e83accf766dd63240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T07:41:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T07:42:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.612613 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T07:41:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://168b4d5fb6279cbe384c66eedf0206dfdb9983987038a8c619e8c559d775f363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T07:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T07:42:42Z is after 2025-08-24T17:21:41Z" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.637649 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gbn67" podStartSLOduration=80.637629278 podStartE2EDuration="1m20.637629278s" podCreationTimestamp="2026-01-24 07:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:42:42.636691218 +0000 UTC m=+101.356564506" watchObservedRunningTime="2026-01-24 07:42:42.637629278 +0000 UTC m=+101.357502566" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.666244 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=79.666220912 podStartE2EDuration="1m19.666220912s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:42:42.664807818 +0000 UTC m=+101.384681136" watchObservedRunningTime="2026-01-24 07:42:42.666220912 +0000 UTC m=+101.386094200" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.673750 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.674077 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.674174 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.674281 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.674364 4705 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:42Z","lastTransitionTime":"2026-01-24T07:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.768108 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.768089708 podStartE2EDuration="1m19.768089708s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:42:42.767592692 +0000 UTC m=+101.487466000" watchObservedRunningTime="2026-01-24 07:42:42.768089708 +0000 UTC m=+101.487962996" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.776780 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.776837 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.776850 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.776868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.776880 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:42Z","lastTransitionTime":"2026-01-24T07:42:42Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.833792 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-w9jkp" podStartSLOduration=81.833776123 podStartE2EDuration="1m21.833776123s" podCreationTimestamp="2026-01-24 07:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:42:42.786635502 +0000 UTC m=+101.506508800" watchObservedRunningTime="2026-01-24 07:42:42.833776123 +0000 UTC m=+101.553649411" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.849616 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qsww7" podStartSLOduration=78.849600783 podStartE2EDuration="1m18.849600783s" podCreationTimestamp="2026-01-24 07:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:42:42.848982964 +0000 UTC m=+101.568856252" watchObservedRunningTime="2026-01-24 07:42:42.849600783 +0000 UTC m=+101.569474071" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.868391 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.868370234 podStartE2EDuration="48.868370234s" podCreationTimestamp="2026-01-24 07:41:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:42:42.86792149 +0000 UTC m=+101.587794778" watchObservedRunningTime="2026-01-24 07:42:42.868370234 +0000 UTC m=+101.588243522" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 
07:42:42.876647 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 13:58:35.316195337 +0000 UTC Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.878455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.878557 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.878621 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.878688 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.878753 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:42Z","lastTransitionTime":"2026-01-24T07:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.897444 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-h9wbv" podStartSLOduration=80.897426964 podStartE2EDuration="1m20.897426964s" podCreationTimestamp="2026-01-24 07:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:42:42.89663475 +0000 UTC m=+101.616508058" watchObservedRunningTime="2026-01-24 07:42:42.897426964 +0000 UTC m=+101.617300262" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.918856 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kzgk6" podStartSLOduration=80.918839938 podStartE2EDuration="1m20.918839938s" podCreationTimestamp="2026-01-24 07:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:42:42.918732474 +0000 UTC m=+101.638605772" watchObservedRunningTime="2026-01-24 07:42:42.918839938 +0000 UTC m=+101.638713216" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.919487 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podStartSLOduration=80.919481087 podStartE2EDuration="1m20.919481087s" podCreationTimestamp="2026-01-24 07:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:42:42.909080665 +0000 UTC m=+101.628953953" watchObservedRunningTime="2026-01-24 07:42:42.919481087 +0000 UTC m=+101.639354375" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.981244 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 
07:42:42.981300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.981311 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.981328 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:42 crc kubenswrapper[4705]: I0124 07:42:42.981341 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:42Z","lastTransitionTime":"2026-01-24T07:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.083271 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.083308 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.083318 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.083332 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.083341 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:43Z","lastTransitionTime":"2026-01-24T07:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.185259 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.185306 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.185317 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.185336 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.185347 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:43Z","lastTransitionTime":"2026-01-24T07:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.228304 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs\") pod \"network-metrics-daemon-mxnng\" (UID: \"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\") " pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:43 crc kubenswrapper[4705]: E0124 07:42:43.228750 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 07:42:43 crc kubenswrapper[4705]: E0124 07:42:43.228932 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs podName:aaa7a0f6-16ad-42c1-b1e2-6c080807fda1 nodeName:}" failed. No retries permitted until 2026-01-24 07:43:47.228914243 +0000 UTC m=+165.948787611 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs") pod "network-metrics-daemon-mxnng" (UID: "aaa7a0f6-16ad-42c1-b1e2-6c080807fda1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.288197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.288252 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.288269 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.288290 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.288302 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:43Z","lastTransitionTime":"2026-01-24T07:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.390158 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.390188 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.390197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.390210 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.390219 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:43Z","lastTransitionTime":"2026-01-24T07:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.492444 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.492474 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.492481 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.492494 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.492502 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:43Z","lastTransitionTime":"2026-01-24T07:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.594794 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.594916 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.594940 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.594975 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.595001 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:43Z","lastTransitionTime":"2026-01-24T07:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.696995 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.697031 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.697040 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.697052 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.697061 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:43Z","lastTransitionTime":"2026-01-24T07:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.799305 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.799343 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.799352 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.799370 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.799383 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:43Z","lastTransitionTime":"2026-01-24T07:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.877589 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 20:41:53.803843602 +0000 UTC Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.901333 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.901364 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.901371 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.901384 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:43 crc kubenswrapper[4705]: I0124 07:42:43.901392 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:43Z","lastTransitionTime":"2026-01-24T07:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.004137 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.004177 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.004187 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.004201 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.004210 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:44Z","lastTransitionTime":"2026-01-24T07:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.105968 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.106008 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.106017 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.106032 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.106040 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:44Z","lastTransitionTime":"2026-01-24T07:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.208903 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.208946 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.208956 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.208973 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.208984 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:44Z","lastTransitionTime":"2026-01-24T07:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.311170 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.311226 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.311238 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.311254 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.311265 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:44Z","lastTransitionTime":"2026-01-24T07:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.413152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.413190 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.413204 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.413220 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.413230 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:44Z","lastTransitionTime":"2026-01-24T07:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.515900 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.515962 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.515980 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.516003 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.516020 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:44Z","lastTransitionTime":"2026-01-24T07:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.574797 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.574925 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.574925 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.574978 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:44 crc kubenswrapper[4705]: E0124 07:42:44.575029 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:44 crc kubenswrapper[4705]: E0124 07:42:44.575125 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:44 crc kubenswrapper[4705]: E0124 07:42:44.575175 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:44 crc kubenswrapper[4705]: E0124 07:42:44.575201 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.618751 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.618784 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.618792 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.618806 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.618817 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:44Z","lastTransitionTime":"2026-01-24T07:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.720367 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.720418 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.720432 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.720452 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.720463 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:44Z","lastTransitionTime":"2026-01-24T07:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.823128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.823208 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.823230 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.823262 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.823285 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:44Z","lastTransitionTime":"2026-01-24T07:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.878580 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 09:23:06.94897844 +0000 UTC Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.925817 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.925888 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.925899 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.925915 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:44 crc kubenswrapper[4705]: I0124 07:42:44.925929 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:44Z","lastTransitionTime":"2026-01-24T07:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.028392 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.028445 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.028461 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.028480 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.028493 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:45Z","lastTransitionTime":"2026-01-24T07:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.130569 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.130600 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.130608 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.130620 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.130628 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:45Z","lastTransitionTime":"2026-01-24T07:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.232994 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.233037 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.233048 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.233064 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.233075 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:45Z","lastTransitionTime":"2026-01-24T07:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.334875 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.334910 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.334922 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.334936 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.334946 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:45Z","lastTransitionTime":"2026-01-24T07:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.436856 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.436902 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.436915 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.436936 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.436952 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:45Z","lastTransitionTime":"2026-01-24T07:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.539200 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.539263 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.539276 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.539293 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.539305 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:45Z","lastTransitionTime":"2026-01-24T07:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.641857 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.641892 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.641900 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.641913 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.641921 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:45Z","lastTransitionTime":"2026-01-24T07:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.745706 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.745742 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.745754 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.745774 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.745787 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:45Z","lastTransitionTime":"2026-01-24T07:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.848401 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.848460 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.848477 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.848502 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.848520 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:45Z","lastTransitionTime":"2026-01-24T07:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.878848 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 13:59:48.284071092 +0000 UTC Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.951172 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.951218 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.951226 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.951239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:45 crc kubenswrapper[4705]: I0124 07:42:45.951250 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:45Z","lastTransitionTime":"2026-01-24T07:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.053767 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.053818 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.053877 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.053901 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.053918 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:46Z","lastTransitionTime":"2026-01-24T07:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.155890 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.155939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.155955 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.155977 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.155998 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:46Z","lastTransitionTime":"2026-01-24T07:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.259631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.259686 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.259722 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.259764 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.259788 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:46Z","lastTransitionTime":"2026-01-24T07:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.363103 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.363194 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.363213 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.363239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.363257 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:46Z","lastTransitionTime":"2026-01-24T07:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.466390 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.466458 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.466480 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.466513 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.466536 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:46Z","lastTransitionTime":"2026-01-24T07:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.569089 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.569152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.569176 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.569206 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.569227 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:46Z","lastTransitionTime":"2026-01-24T07:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.575631 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.575671 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.575732 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:46 crc kubenswrapper[4705]: E0124 07:42:46.575784 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.575811 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:46 crc kubenswrapper[4705]: E0124 07:42:46.575987 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:46 crc kubenswrapper[4705]: E0124 07:42:46.576027 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:46 crc kubenswrapper[4705]: E0124 07:42:46.576195 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.672399 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.672471 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.672508 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.672536 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.672556 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:46Z","lastTransitionTime":"2026-01-24T07:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.775722 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.775778 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.775795 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.775844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.775861 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:46Z","lastTransitionTime":"2026-01-24T07:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.878452 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.878505 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.878522 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.878544 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.878565 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:46Z","lastTransitionTime":"2026-01-24T07:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.879352 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 10:09:47.322729137 +0000 UTC Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.981567 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.981623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.981638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.981661 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:46 crc kubenswrapper[4705]: I0124 07:42:46.981676 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:46Z","lastTransitionTime":"2026-01-24T07:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.085005 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.085087 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.085113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.085151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.085178 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:47Z","lastTransitionTime":"2026-01-24T07:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.188544 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.188617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.188632 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.188649 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.188661 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:47Z","lastTransitionTime":"2026-01-24T07:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.299547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.299586 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.299595 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.299610 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.299618 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:47Z","lastTransitionTime":"2026-01-24T07:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.405954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.406022 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.406037 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.406058 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.406075 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:47Z","lastTransitionTime":"2026-01-24T07:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.507870 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.507940 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.507950 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.507963 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.507973 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:47Z","lastTransitionTime":"2026-01-24T07:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.609910 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.610042 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.610061 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.610078 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.610089 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:47Z","lastTransitionTime":"2026-01-24T07:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.712377 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.712416 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.712425 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.712438 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.712447 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:47Z","lastTransitionTime":"2026-01-24T07:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.815397 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.815454 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.815466 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.815482 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.815494 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:47Z","lastTransitionTime":"2026-01-24T07:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.880145 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:16:54.4826116 +0000 UTC Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.918236 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.918292 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.918301 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.918316 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:47 crc kubenswrapper[4705]: I0124 07:42:47.918325 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:47Z","lastTransitionTime":"2026-01-24T07:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.020667 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.020692 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.020700 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.020713 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.020722 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:48Z","lastTransitionTime":"2026-01-24T07:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.122899 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.122941 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.122964 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.122984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.122999 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:48Z","lastTransitionTime":"2026-01-24T07:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.224610 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.224649 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.224658 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.224673 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.224684 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:48Z","lastTransitionTime":"2026-01-24T07:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.327708 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.327774 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.327795 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.327870 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.327903 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:48Z","lastTransitionTime":"2026-01-24T07:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.430646 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.430689 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.430701 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.430717 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.430728 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:48Z","lastTransitionTime":"2026-01-24T07:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.532911 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.532971 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.532982 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.533004 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.533021 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:48Z","lastTransitionTime":"2026-01-24T07:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.575155 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.575230 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:48 crc kubenswrapper[4705]: E0124 07:42:48.575303 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.575184 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:48 crc kubenswrapper[4705]: E0124 07:42:48.575472 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:48 crc kubenswrapper[4705]: E0124 07:42:48.575627 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.575856 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:48 crc kubenswrapper[4705]: E0124 07:42:48.576021 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.635742 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.635806 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.635836 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.635896 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.635927 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:48Z","lastTransitionTime":"2026-01-24T07:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.739111 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.739172 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.739189 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.739213 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.739232 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:48Z","lastTransitionTime":"2026-01-24T07:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.842630 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.842674 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.842683 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.842698 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.842708 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:48Z","lastTransitionTime":"2026-01-24T07:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.880954 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 16:13:09.872454003 +0000 UTC Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.945989 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.946039 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.946051 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.946068 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:48 crc kubenswrapper[4705]: I0124 07:42:48.946087 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:48Z","lastTransitionTime":"2026-01-24T07:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.047913 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.047954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.047963 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.047978 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.047988 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:49Z","lastTransitionTime":"2026-01-24T07:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.150031 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.150069 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.150083 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.150100 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.150110 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:49Z","lastTransitionTime":"2026-01-24T07:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.252987 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.253040 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.253057 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.253089 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.253105 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:49Z","lastTransitionTime":"2026-01-24T07:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.355668 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.355718 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.355730 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.355745 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.355755 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:49Z","lastTransitionTime":"2026-01-24T07:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.458132 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.458182 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.458197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.458218 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.458235 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:49Z","lastTransitionTime":"2026-01-24T07:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.560306 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.560373 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.560387 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.560406 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.560420 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:49Z","lastTransitionTime":"2026-01-24T07:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.662630 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.662675 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.662686 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.662699 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.662713 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:49Z","lastTransitionTime":"2026-01-24T07:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.765457 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.765504 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.765519 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.765539 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.765554 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:49Z","lastTransitionTime":"2026-01-24T07:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.868004 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.868053 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.868067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.868087 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.868099 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:49Z","lastTransitionTime":"2026-01-24T07:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.881056 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 18:22:21.941789953 +0000 UTC Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.970957 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.971013 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.971022 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.971036 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:49 crc kubenswrapper[4705]: I0124 07:42:49.971064 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:49Z","lastTransitionTime":"2026-01-24T07:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.074196 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.088715 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.088772 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.088809 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.088870 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:50Z","lastTransitionTime":"2026-01-24T07:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.191798 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.191938 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.191958 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.191988 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.192007 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:50Z","lastTransitionTime":"2026-01-24T07:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.294685 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.294750 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.294770 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.294799 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.294853 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:50Z","lastTransitionTime":"2026-01-24T07:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.380412 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.380447 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.380458 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.380474 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.380486 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T07:42:50Z","lastTransitionTime":"2026-01-24T07:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.446488 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s"] Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.446971 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.449268 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.450564 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.450794 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.451012 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.493856 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=39.493824743 podStartE2EDuration="39.493824743s" podCreationTimestamp="2026-01-24 07:42:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:42:50.493525484 +0000 UTC m=+109.213398812" watchObservedRunningTime="2026-01-24 07:42:50.493824743 +0000 UTC m=+109.213698031" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.508319 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.508294901 podStartE2EDuration="1m29.508294901s" podCreationTimestamp="2026-01-24 07:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:42:50.507589579 +0000 UTC m=+109.227462897" watchObservedRunningTime="2026-01-24 
07:42:50.508294901 +0000 UTC m=+109.228168189" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.549794 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wpj8s\" (UID: \"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.549871 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wpj8s\" (UID: \"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.549894 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wpj8s\" (UID: \"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.549914 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wpj8s\" (UID: \"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.549995 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wpj8s\" (UID: \"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.575350 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.575386 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.575393 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.575351 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:50 crc kubenswrapper[4705]: E0124 07:42:50.575492 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:50 crc kubenswrapper[4705]: E0124 07:42:50.575570 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:50 crc kubenswrapper[4705]: E0124 07:42:50.575664 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:50 crc kubenswrapper[4705]: E0124 07:42:50.575982 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.651595 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wpj8s\" (UID: \"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.652354 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wpj8s\" (UID: \"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.652420 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wpj8s\" (UID: \"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.652458 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wpj8s\" (UID: \"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.652495 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wpj8s\" (UID: \"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.652540 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wpj8s\" (UID: \"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.652608 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wpj8s\" (UID: \"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3\") 
" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.654670 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wpj8s\" (UID: \"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.663580 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wpj8s\" (UID: \"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.673991 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wpj8s\" (UID: \"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.764932 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" Jan 24 07:42:50 crc kubenswrapper[4705]: W0124 07:42:50.776342 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ba5d6fd_c0fd_41a3_99b5_1fd0102869b3.slice/crio-bd363f00babc3ed9bfec283bf165ad77d5f3df4fb944990bc315420689892c89 WatchSource:0}: Error finding container bd363f00babc3ed9bfec283bf165ad77d5f3df4fb944990bc315420689892c89: Status 404 returned error can't find the container with id bd363f00babc3ed9bfec283bf165ad77d5f3df4fb944990bc315420689892c89 Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.834472 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" event={"ID":"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3","Type":"ContainerStarted","Data":"bd363f00babc3ed9bfec283bf165ad77d5f3df4fb944990bc315420689892c89"} Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.881548 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 21:36:45.483323328 +0000 UTC Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.882436 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 24 07:42:50 crc kubenswrapper[4705]: I0124 07:42:50.890643 4705 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 24 07:42:51 crc kubenswrapper[4705]: I0124 07:42:51.838696 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" event={"ID":"7ba5d6fd-c0fd-41a3-99b5-1fd0102869b3","Type":"ContainerStarted","Data":"9574b11743556fc07959d73ee7bd37d906ca69bd3d4be6dbb3600c2b9fc43b00"} Jan 24 07:42:51 crc kubenswrapper[4705]: I0124 07:42:51.852129 4705 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wpj8s" podStartSLOduration=89.852111568 podStartE2EDuration="1m29.852111568s" podCreationTimestamp="2026-01-24 07:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:42:51.851504099 +0000 UTC m=+110.571377387" watchObservedRunningTime="2026-01-24 07:42:51.852111568 +0000 UTC m=+110.571984856" Jan 24 07:42:52 crc kubenswrapper[4705]: I0124 07:42:52.575663 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:52 crc kubenswrapper[4705]: E0124 07:42:52.576519 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:52 crc kubenswrapper[4705]: I0124 07:42:52.575744 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:52 crc kubenswrapper[4705]: E0124 07:42:52.576786 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:52 crc kubenswrapper[4705]: I0124 07:42:52.575675 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:52 crc kubenswrapper[4705]: E0124 07:42:52.577081 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:52 crc kubenswrapper[4705]: I0124 07:42:52.575790 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:52 crc kubenswrapper[4705]: E0124 07:42:52.577353 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:54 crc kubenswrapper[4705]: I0124 07:42:54.575142 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:54 crc kubenswrapper[4705]: E0124 07:42:54.575336 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:54 crc kubenswrapper[4705]: I0124 07:42:54.575639 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:54 crc kubenswrapper[4705]: E0124 07:42:54.575734 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:54 crc kubenswrapper[4705]: I0124 07:42:54.575977 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:54 crc kubenswrapper[4705]: E0124 07:42:54.576081 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:54 crc kubenswrapper[4705]: I0124 07:42:54.576301 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:54 crc kubenswrapper[4705]: E0124 07:42:54.576391 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:56 crc kubenswrapper[4705]: I0124 07:42:56.574709 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:56 crc kubenswrapper[4705]: I0124 07:42:56.574838 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:56 crc kubenswrapper[4705]: I0124 07:42:56.574952 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:56 crc kubenswrapper[4705]: I0124 07:42:56.574952 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:56 crc kubenswrapper[4705]: E0124 07:42:56.574946 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:56 crc kubenswrapper[4705]: E0124 07:42:56.575037 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:42:56 crc kubenswrapper[4705]: E0124 07:42:56.575133 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:56 crc kubenswrapper[4705]: E0124 07:42:56.575209 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:57 crc kubenswrapper[4705]: I0124 07:42:57.575994 4705 scope.go:117] "RemoveContainer" containerID="089fb2079fb32f3d46cf6a91bfc27a6fc07d67169f8b0dec889ef15a0433f124" Jan 24 07:42:57 crc kubenswrapper[4705]: E0124 07:42:57.576265 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" Jan 24 07:42:58 crc kubenswrapper[4705]: I0124 07:42:58.574894 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:42:58 crc kubenswrapper[4705]: I0124 07:42:58.575005 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:42:58 crc kubenswrapper[4705]: E0124 07:42:58.575033 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:42:58 crc kubenswrapper[4705]: I0124 07:42:58.575066 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:42:58 crc kubenswrapper[4705]: E0124 07:42:58.575223 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:42:58 crc kubenswrapper[4705]: E0124 07:42:58.575312 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:42:58 crc kubenswrapper[4705]: I0124 07:42:58.575473 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:42:58 crc kubenswrapper[4705]: E0124 07:42:58.575637 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:43:00 crc kubenswrapper[4705]: I0124 07:43:00.574984 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:00 crc kubenswrapper[4705]: E0124 07:43:00.575149 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:43:00 crc kubenswrapper[4705]: I0124 07:43:00.575219 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:00 crc kubenswrapper[4705]: I0124 07:43:00.575233 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:00 crc kubenswrapper[4705]: E0124 07:43:00.575355 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:43:00 crc kubenswrapper[4705]: I0124 07:43:00.575946 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:00 crc kubenswrapper[4705]: E0124 07:43:00.575957 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:43:00 crc kubenswrapper[4705]: E0124 07:43:00.576049 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:43:01 crc kubenswrapper[4705]: E0124 07:43:01.624746 4705 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 24 07:43:01 crc kubenswrapper[4705]: E0124 07:43:01.692965 4705 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 24 07:43:02 crc kubenswrapper[4705]: I0124 07:43:02.575282 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:02 crc kubenswrapper[4705]: I0124 07:43:02.575373 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:02 crc kubenswrapper[4705]: I0124 07:43:02.575295 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:02 crc kubenswrapper[4705]: E0124 07:43:02.575440 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:43:02 crc kubenswrapper[4705]: E0124 07:43:02.575496 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:43:02 crc kubenswrapper[4705]: E0124 07:43:02.575589 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:43:02 crc kubenswrapper[4705]: I0124 07:43:02.575807 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:02 crc kubenswrapper[4705]: E0124 07:43:02.576015 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:43:03 crc kubenswrapper[4705]: I0124 07:43:03.881745 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9wbv_5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd/kube-multus/1.log" Jan 24 07:43:03 crc kubenswrapper[4705]: I0124 07:43:03.882806 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9wbv_5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd/kube-multus/0.log" Jan 24 07:43:03 crc kubenswrapper[4705]: I0124 07:43:03.882871 4705 generic.go:334] "Generic (PLEG): container finished" podID="5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd" containerID="5a5e5acaef986257b20ca3ea8d8222ce26d04290d2ba0562be112581ae9f12c8" exitCode=1 Jan 24 07:43:03 crc kubenswrapper[4705]: I0124 07:43:03.882905 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9wbv" event={"ID":"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd","Type":"ContainerDied","Data":"5a5e5acaef986257b20ca3ea8d8222ce26d04290d2ba0562be112581ae9f12c8"} Jan 24 07:43:03 crc kubenswrapper[4705]: I0124 07:43:03.882941 4705 scope.go:117] "RemoveContainer" containerID="b9d523a8fb4f606fd2c8f40e13e7e3671e5e4404b313b9a557579dedb88177e0" Jan 24 07:43:03 crc kubenswrapper[4705]: I0124 07:43:03.883323 4705 scope.go:117] "RemoveContainer" containerID="5a5e5acaef986257b20ca3ea8d8222ce26d04290d2ba0562be112581ae9f12c8" Jan 24 07:43:03 crc kubenswrapper[4705]: E0124 07:43:03.883542 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-h9wbv_openshift-multus(5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd)\"" pod="openshift-multus/multus-h9wbv" podUID="5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd" Jan 24 07:43:04 crc kubenswrapper[4705]: I0124 07:43:04.575750 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:04 crc kubenswrapper[4705]: E0124 07:43:04.576251 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:43:04 crc kubenswrapper[4705]: I0124 07:43:04.575802 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:04 crc kubenswrapper[4705]: I0124 07:43:04.575777 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:04 crc kubenswrapper[4705]: E0124 07:43:04.576330 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:43:04 crc kubenswrapper[4705]: I0124 07:43:04.575843 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:04 crc kubenswrapper[4705]: E0124 07:43:04.576433 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:43:04 crc kubenswrapper[4705]: E0124 07:43:04.576510 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:43:04 crc kubenswrapper[4705]: I0124 07:43:04.888077 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9wbv_5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd/kube-multus/1.log" Jan 24 07:43:06 crc kubenswrapper[4705]: I0124 07:43:06.575511 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:06 crc kubenswrapper[4705]: E0124 07:43:06.575631 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:43:06 crc kubenswrapper[4705]: I0124 07:43:06.575504 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:06 crc kubenswrapper[4705]: I0124 07:43:06.575711 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:06 crc kubenswrapper[4705]: I0124 07:43:06.575516 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:06 crc kubenswrapper[4705]: E0124 07:43:06.575946 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:43:06 crc kubenswrapper[4705]: E0124 07:43:06.576087 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:43:06 crc kubenswrapper[4705]: E0124 07:43:06.576187 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:43:06 crc kubenswrapper[4705]: E0124 07:43:06.695490 4705 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 24 07:43:08 crc kubenswrapper[4705]: I0124 07:43:08.575536 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:08 crc kubenswrapper[4705]: I0124 07:43:08.575562 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:08 crc kubenswrapper[4705]: E0124 07:43:08.575664 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:43:08 crc kubenswrapper[4705]: I0124 07:43:08.575757 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:08 crc kubenswrapper[4705]: I0124 07:43:08.575775 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:08 crc kubenswrapper[4705]: E0124 07:43:08.575927 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:43:08 crc kubenswrapper[4705]: E0124 07:43:08.576047 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:43:08 crc kubenswrapper[4705]: E0124 07:43:08.576597 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:43:08 crc kubenswrapper[4705]: I0124 07:43:08.576687 4705 scope.go:117] "RemoveContainer" containerID="089fb2079fb32f3d46cf6a91bfc27a6fc07d67169f8b0dec889ef15a0433f124" Jan 24 07:43:08 crc kubenswrapper[4705]: E0124 07:43:08.576840 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-js42b_openshift-ovn-kubernetes(3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" Jan 24 07:43:10 crc kubenswrapper[4705]: I0124 07:43:10.575323 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:10 crc kubenswrapper[4705]: I0124 07:43:10.575427 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:10 crc kubenswrapper[4705]: I0124 07:43:10.575346 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:10 crc kubenswrapper[4705]: E0124 07:43:10.575489 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:43:10 crc kubenswrapper[4705]: E0124 07:43:10.575568 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:43:10 crc kubenswrapper[4705]: E0124 07:43:10.575739 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:43:10 crc kubenswrapper[4705]: I0124 07:43:10.575799 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:10 crc kubenswrapper[4705]: E0124 07:43:10.575905 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:43:11 crc kubenswrapper[4705]: E0124 07:43:11.696196 4705 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 24 07:43:12 crc kubenswrapper[4705]: I0124 07:43:12.574577 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:12 crc kubenswrapper[4705]: I0124 07:43:12.574631 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:12 crc kubenswrapper[4705]: I0124 07:43:12.574631 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:12 crc kubenswrapper[4705]: I0124 07:43:12.574651 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:12 crc kubenswrapper[4705]: E0124 07:43:12.574716 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:43:12 crc kubenswrapper[4705]: E0124 07:43:12.574777 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:43:12 crc kubenswrapper[4705]: E0124 07:43:12.574853 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:43:12 crc kubenswrapper[4705]: E0124 07:43:12.574859 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:43:14 crc kubenswrapper[4705]: I0124 07:43:14.575509 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:14 crc kubenswrapper[4705]: I0124 07:43:14.575590 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:14 crc kubenswrapper[4705]: I0124 07:43:14.575623 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:14 crc kubenswrapper[4705]: I0124 07:43:14.575661 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:14 crc kubenswrapper[4705]: E0124 07:43:14.575682 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:43:14 crc kubenswrapper[4705]: E0124 07:43:14.575791 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:43:14 crc kubenswrapper[4705]: E0124 07:43:14.575940 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:43:14 crc kubenswrapper[4705]: E0124 07:43:14.576004 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:43:16 crc kubenswrapper[4705]: I0124 07:43:16.575062 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:16 crc kubenswrapper[4705]: I0124 07:43:16.575089 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:16 crc kubenswrapper[4705]: I0124 07:43:16.575150 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:16 crc kubenswrapper[4705]: E0124 07:43:16.575230 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:43:16 crc kubenswrapper[4705]: I0124 07:43:16.575278 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:16 crc kubenswrapper[4705]: E0124 07:43:16.575431 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:43:16 crc kubenswrapper[4705]: E0124 07:43:16.575504 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:43:16 crc kubenswrapper[4705]: E0124 07:43:16.575675 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:43:16 crc kubenswrapper[4705]: E0124 07:43:16.697941 4705 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 24 07:43:18 crc kubenswrapper[4705]: I0124 07:43:18.574944 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:18 crc kubenswrapper[4705]: I0124 07:43:18.574998 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:18 crc kubenswrapper[4705]: E0124 07:43:18.575142 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:43:18 crc kubenswrapper[4705]: I0124 07:43:18.575031 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:18 crc kubenswrapper[4705]: I0124 07:43:18.575185 4705 scope.go:117] "RemoveContainer" containerID="5a5e5acaef986257b20ca3ea8d8222ce26d04290d2ba0562be112581ae9f12c8" Jan 24 07:43:18 crc kubenswrapper[4705]: I0124 07:43:18.575027 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:18 crc kubenswrapper[4705]: E0124 07:43:18.575351 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:43:18 crc kubenswrapper[4705]: E0124 07:43:18.575479 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:43:18 crc kubenswrapper[4705]: E0124 07:43:18.575541 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:43:18 crc kubenswrapper[4705]: I0124 07:43:18.935594 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9wbv_5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd/kube-multus/1.log" Jan 24 07:43:18 crc kubenswrapper[4705]: I0124 07:43:18.935934 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9wbv" event={"ID":"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd","Type":"ContainerStarted","Data":"05df28d77c528db547c77fa6f2e2e34112f9d66c1b57c7bc41c7319cbd191449"} Jan 24 07:43:19 crc kubenswrapper[4705]: I0124 07:43:19.575995 4705 scope.go:117] "RemoveContainer" containerID="089fb2079fb32f3d46cf6a91bfc27a6fc07d67169f8b0dec889ef15a0433f124" Jan 24 07:43:19 crc kubenswrapper[4705]: I0124 07:43:19.940226 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovnkube-controller/3.log" Jan 24 07:43:19 crc kubenswrapper[4705]: I0124 07:43:19.942714 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerStarted","Data":"a301faf371be2148d30d37d8a5614e3aac18be4bddb9bc53afcbfe021b2c1372"} Jan 24 07:43:19 crc kubenswrapper[4705]: I0124 07:43:19.943875 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:43:19 crc kubenswrapper[4705]: I0124 07:43:19.980281 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podStartSLOduration=117.980263139 podStartE2EDuration="1m57.980263139s" podCreationTimestamp="2026-01-24 07:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-24 07:43:19.978811637 +0000 UTC m=+138.698684925" watchObservedRunningTime="2026-01-24 07:43:19.980263139 +0000 UTC m=+138.700136437" Jan 24 07:43:20 crc kubenswrapper[4705]: I0124 07:43:20.490439 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mxnng"] Jan 24 07:43:20 crc kubenswrapper[4705]: I0124 07:43:20.490577 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:20 crc kubenswrapper[4705]: E0124 07:43:20.490679 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:43:20 crc kubenswrapper[4705]: I0124 07:43:20.574880 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:20 crc kubenswrapper[4705]: I0124 07:43:20.574934 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:20 crc kubenswrapper[4705]: E0124 07:43:20.575058 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:43:20 crc kubenswrapper[4705]: E0124 07:43:20.575100 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:43:20 crc kubenswrapper[4705]: I0124 07:43:20.575471 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:20 crc kubenswrapper[4705]: E0124 07:43:20.575598 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:43:21 crc kubenswrapper[4705]: I0124 07:43:21.574649 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:21 crc kubenswrapper[4705]: E0124 07:43:21.576520 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:43:21 crc kubenswrapper[4705]: E0124 07:43:21.698466 4705 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 24 07:43:22 crc kubenswrapper[4705]: I0124 07:43:22.575179 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:22 crc kubenswrapper[4705]: I0124 07:43:22.575257 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:22 crc kubenswrapper[4705]: I0124 07:43:22.575307 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:22 crc kubenswrapper[4705]: E0124 07:43:22.575424 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:43:22 crc kubenswrapper[4705]: E0124 07:43:22.575689 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:43:22 crc kubenswrapper[4705]: E0124 07:43:22.575936 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:43:23 crc kubenswrapper[4705]: I0124 07:43:23.577748 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:23 crc kubenswrapper[4705]: E0124 07:43:23.577864 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:43:24 crc kubenswrapper[4705]: I0124 07:43:24.575416 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:24 crc kubenswrapper[4705]: I0124 07:43:24.575443 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:24 crc kubenswrapper[4705]: E0124 07:43:24.575886 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:43:24 crc kubenswrapper[4705]: I0124 07:43:24.575443 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:24 crc kubenswrapper[4705]: E0124 07:43:24.575957 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:43:24 crc kubenswrapper[4705]: E0124 07:43:24.576041 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:43:25 crc kubenswrapper[4705]: I0124 07:43:25.575213 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:25 crc kubenswrapper[4705]: E0124 07:43:25.575373 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mxnng" podUID="aaa7a0f6-16ad-42c1-b1e2-6c080807fda1" Jan 24 07:43:26 crc kubenswrapper[4705]: I0124 07:43:26.574973 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:26 crc kubenswrapper[4705]: I0124 07:43:26.575030 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:26 crc kubenswrapper[4705]: I0124 07:43:26.575030 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:26 crc kubenswrapper[4705]: E0124 07:43:26.575156 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 07:43:26 crc kubenswrapper[4705]: E0124 07:43:26.575251 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 07:43:26 crc kubenswrapper[4705]: E0124 07:43:26.575313 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 07:43:27 crc kubenswrapper[4705]: I0124 07:43:27.575056 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:27 crc kubenswrapper[4705]: I0124 07:43:27.578397 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 24 07:43:27 crc kubenswrapper[4705]: I0124 07:43:27.579300 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 24 07:43:28 crc kubenswrapper[4705]: I0124 07:43:28.575601 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:28 crc kubenswrapper[4705]: I0124 07:43:28.575673 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:28 crc kubenswrapper[4705]: I0124 07:43:28.575617 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:28 crc kubenswrapper[4705]: I0124 07:43:28.577495 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 24 07:43:28 crc kubenswrapper[4705]: I0124 07:43:28.577593 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 24 07:43:28 crc kubenswrapper[4705]: I0124 07:43:28.579035 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 24 07:43:28 crc kubenswrapper[4705]: I0124 07:43:28.579283 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 24 07:43:30 crc kubenswrapper[4705]: I0124 07:43:30.702322 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:30 crc kubenswrapper[4705]: I0124 07:43:30.702454 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:30 crc kubenswrapper[4705]: I0124 07:43:30.702476 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:30 crc kubenswrapper[4705]: E0124 07:43:30.702595 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:45:32.702565111 +0000 UTC m=+271.422438389 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:30 crc kubenswrapper[4705]: I0124 07:43:30.703359 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:30 crc kubenswrapper[4705]: I0124 07:43:30.708809 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:30 crc kubenswrapper[4705]: I0124 07:43:30.904047 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:30 crc kubenswrapper[4705]: I0124 07:43:30.904379 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:30 crc kubenswrapper[4705]: I0124 07:43:30.907908 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:30 crc kubenswrapper[4705]: I0124 07:43:30.908801 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:30 crc kubenswrapper[4705]: I0124 07:43:30.990692 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 07:43:30 crc kubenswrapper[4705]: I0124 07:43:30.998612 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.005651 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.189853 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.327361 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hqzgc"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.327802 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.328157 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.328498 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.332790 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.332892 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gjggq"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.338079 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pg499"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.338485 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h4w4g"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.338784 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.339138 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bvnn7"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.339694 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.342704 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.342847 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.343087 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.343105 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.373068 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.377841 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.378440 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gjggq" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.380417 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.380734 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.381768 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.388805 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.389055 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.389296 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.389302 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.389477 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.389625 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.389956 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.390202 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.392104 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hqzgc"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.392250 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 
07:43:31.396853 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.397077 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.397844 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.412466 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-config\") pod \"controller-manager-879f6c89f-h4w4g\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.412513 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1467a368-ffe2-4fd5-abca-e42018890e40-images\") pod \"machine-api-operator-5694c8668f-hqzgc\" (UID: \"1467a368-ffe2-4fd5-abca-e42018890e40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.412542 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0e7822fc-7419-4806-907b-b442a62f4baf-etcd-serving-ca\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.412568 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fhlm\" (UniqueName: 
\"kubernetes.io/projected/3a00e317-0d77-42ec-b31c-916797497da3-kube-api-access-9fhlm\") pod \"openshift-config-operator-7777fb866f-pg499\" (UID: \"3a00e317-0d77-42ec-b31c-916797497da3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.412591 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0e7822fc-7419-4806-907b-b442a62f4baf-audit\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.412614 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/02af14b8-f5ac-4ce9-a001-8389192957e1-encryption-config\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.412636 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69dbf7a5-1a4e-403b-8830-4e4b49305af5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hwqmb\" (UID: \"69dbf7a5-1a4e-403b-8830-4e4b49305af5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.412656 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0e7822fc-7419-4806-907b-b442a62f4baf-image-import-ca\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 
07:43:31.412678 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1467a368-ffe2-4fd5-abca-e42018890e40-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hqzgc\" (UID: \"1467a368-ffe2-4fd5-abca-e42018890e40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.412702 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e7822fc-7419-4806-907b-b442a62f4baf-audit-dir\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.412724 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4cj4\" (UniqueName: \"kubernetes.io/projected/02af14b8-f5ac-4ce9-a001-8389192957e1-kube-api-access-z4cj4\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.412748 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a00e317-0d77-42ec-b31c-916797497da3-serving-cert\") pod \"openshift-config-operator-7777fb866f-pg499\" (UID: \"3a00e317-0d77-42ec-b31c-916797497da3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.412771 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cfbb848d-6faf-4397-94fa-49b2a319c091-auth-proxy-config\") pod 
\"machine-approver-56656f9798-v85qc\" (UID: \"cfbb848d-6faf-4397-94fa-49b2a319c091\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.412792 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/02af14b8-f5ac-4ce9-a001-8389192957e1-audit-dir\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.412891 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-config\") pod \"route-controller-manager-6576b87f9c-27hpm\" (UID: \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.412921 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/02af14b8-f5ac-4ce9-a001-8389192957e1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413034 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-h4w4g\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413089 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lsqx\" (UniqueName: \"kubernetes.io/projected/69dbf7a5-1a4e-403b-8830-4e4b49305af5-kube-api-access-6lsqx\") pod \"openshift-apiserver-operator-796bbdcf4f-hwqmb\" (UID: \"69dbf7a5-1a4e-403b-8830-4e4b49305af5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413120 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e7822fc-7419-4806-907b-b442a62f4baf-serving-cert\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413143 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02af14b8-f5ac-4ce9-a001-8389192957e1-serving-cert\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413167 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x68nd\" (UniqueName: \"kubernetes.io/projected/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-kube-api-access-x68nd\") pod \"controller-manager-879f6c89f-h4w4g\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413191 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69dbf7a5-1a4e-403b-8830-4e4b49305af5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hwqmb\" (UID: 
\"69dbf7a5-1a4e-403b-8830-4e4b49305af5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413225 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5t26\" (UniqueName: \"kubernetes.io/projected/0e7822fc-7419-4806-907b-b442a62f4baf-kube-api-access-q5t26\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413253 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-serving-cert\") pod \"controller-manager-879f6c89f-h4w4g\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413279 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-client-ca\") pod \"route-controller-manager-6576b87f9c-27hpm\" (UID: \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413308 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crfnf\" (UniqueName: \"kubernetes.io/projected/a144ed50-2315-4874-a8f3-2f1f39111666-kube-api-access-crfnf\") pod \"dns-operator-744455d44c-gjggq\" (UID: \"a144ed50-2315-4874-a8f3-2f1f39111666\") " pod="openshift-dns-operator/dns-operator-744455d44c-gjggq" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413333 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/02af14b8-f5ac-4ce9-a001-8389192957e1-audit-policies\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413352 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02af14b8-f5ac-4ce9-a001-8389192957e1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413380 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a144ed50-2315-4874-a8f3-2f1f39111666-metrics-tls\") pod \"dns-operator-744455d44c-gjggq\" (UID: \"a144ed50-2315-4874-a8f3-2f1f39111666\") " pod="openshift-dns-operator/dns-operator-744455d44c-gjggq" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413413 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0e7822fc-7419-4806-907b-b442a62f4baf-node-pullsecrets\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413433 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0e7822fc-7419-4806-907b-b442a62f4baf-etcd-client\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 
07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413456 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjj5t\" (UniqueName: \"kubernetes.io/projected/1467a368-ffe2-4fd5-abca-e42018890e40-kube-api-access-bjj5t\") pod \"machine-api-operator-5694c8668f-hqzgc\" (UID: \"1467a368-ffe2-4fd5-abca-e42018890e40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413476 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e7822fc-7419-4806-907b-b442a62f4baf-config\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413498 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-serving-cert\") pod \"route-controller-manager-6576b87f9c-27hpm\" (UID: \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413518 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkwws\" (UniqueName: \"kubernetes.io/projected/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-kube-api-access-zkwws\") pod \"route-controller-manager-6576b87f9c-27hpm\" (UID: \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413539 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvn6g\" (UniqueName: 
\"kubernetes.io/projected/cfbb848d-6faf-4397-94fa-49b2a319c091-kube-api-access-vvn6g\") pod \"machine-approver-56656f9798-v85qc\" (UID: \"cfbb848d-6faf-4397-94fa-49b2a319c091\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413561 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/02af14b8-f5ac-4ce9-a001-8389192957e1-etcd-client\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413584 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cfbb848d-6faf-4397-94fa-49b2a319c091-machine-approver-tls\") pod \"machine-approver-56656f9798-v85qc\" (UID: \"cfbb848d-6faf-4397-94fa-49b2a319c091\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413612 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3a00e317-0d77-42ec-b31c-916797497da3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pg499\" (UID: \"3a00e317-0d77-42ec-b31c-916797497da3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413631 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-client-ca\") pod \"controller-manager-879f6c89f-h4w4g\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413649 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0e7822fc-7419-4806-907b-b442a62f4baf-encryption-config\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413669 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1467a368-ffe2-4fd5-abca-e42018890e40-config\") pod \"machine-api-operator-5694c8668f-hqzgc\" (UID: \"1467a368-ffe2-4fd5-abca-e42018890e40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413690 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfbb848d-6faf-4397-94fa-49b2a319c091-config\") pod \"machine-approver-56656f9798-v85qc\" (UID: \"cfbb848d-6faf-4397-94fa-49b2a319c091\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.413711 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e7822fc-7419-4806-907b-b442a62f4baf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.424092 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.424204 4705 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.424282 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.424857 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.425846 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gjggq"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.426718 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pg499"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.432704 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.432882 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.433090 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.433295 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.433617 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.433729 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 24 07:43:31 
crc kubenswrapper[4705]: I0124 07:43:31.433847 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.434039 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.434169 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.434282 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.435090 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.435234 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.435617 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.435965 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.435976 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.436131 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.436245 4705 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.436294 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.436473 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.437381 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.443071 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bvnn7"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.445508 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.445783 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.446018 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.447768 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.450353 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.451033 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.451224 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"etcd-client" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.452151 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.452323 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.452440 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.452544 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.452646 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.452766 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.452800 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.453075 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.453135 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.453175 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.453228 4705 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.453252 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.462655 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-7czb5"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.467390 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.469246 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rnmwt"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.469686 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7czb5" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.469792 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.469692 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-m7gzc"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.470426 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mz97d"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.470739 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.471807 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.477616 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.478365 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.478650 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-rfsmt"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.478977 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.480288 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.480656 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.487145 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.487359 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.487651 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.487794 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.487955 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.488066 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.488140 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.488217 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.488352 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.488433 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 24 
07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.488433 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.492089 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-t774k"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.492676 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.493039 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.493091 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.493396 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t774k" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.493587 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.496056 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.496633 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.497150 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d54hp"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.508572 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.512328 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h8gjk"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.513707 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.515323 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.516598 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.519416 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7cvlf"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.520606 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.521186 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.521636 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.521848 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f220961-3dce-41b1-8ec4-26dece52318b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lfjc8\" (UID: \"9f220961-3dce-41b1-8ec4-26dece52318b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.521932 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3a00e317-0d77-42ec-b31c-916797497da3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pg499\" (UID: \"3a00e317-0d77-42ec-b31c-916797497da3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.521975 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-client-ca\") pod \"controller-manager-879f6c89f-h4w4g\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.522007 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0e7822fc-7419-4806-907b-b442a62f4baf-encryption-config\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.522325 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6e67495-3fae-45d0-a5d5-4741c0a763c9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-k5v88\" (UID: \"b6e67495-3fae-45d0-a5d5-4741c0a763c9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.522376 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.522413 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1467a368-ffe2-4fd5-abca-e42018890e40-config\") pod \"machine-api-operator-5694c8668f-hqzgc\" (UID: \"1467a368-ffe2-4fd5-abca-e42018890e40\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.522524 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7cvlf" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.523693 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.523874 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.522449 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-h2p8m\" (UID: \"592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.529320 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-n7xmf"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.532146 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3a00e317-0d77-42ec-b31c-916797497da3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pg499\" (UID: \"3a00e317-0d77-42ec-b31c-916797497da3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.532564 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 24 07:43:31 crc 
kubenswrapper[4705]: I0124 07:43:31.532946 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.533292 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.533345 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.533543 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.533653 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.533728 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.533775 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.533798 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.533871 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.534337 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.534382 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"serving-cert" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.534710 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5317443-5085-4cd9-b3fb-6b8282746932-metrics-certs\") pod \"router-default-5444994796-rfsmt\" (UID: \"a5317443-5085-4cd9-b3fb-6b8282746932\") " pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.534756 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfbb848d-6faf-4397-94fa-49b2a319c091-config\") pod \"machine-approver-56656f9798-v85qc\" (UID: \"cfbb848d-6faf-4397-94fa-49b2a319c091\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.534780 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e7822fc-7419-4806-907b-b442a62f4baf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.534797 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-h2p8m\" (UID: \"592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.534815 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b6e67495-3fae-45d0-a5d5-4741c0a763c9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-k5v88\" (UID: \"b6e67495-3fae-45d0-a5d5-4741c0a763c9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.534848 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn7bz\" (UniqueName: \"kubernetes.io/projected/c30fd97b-0555-479c-969f-4148e7bfb66d-kube-api-access-gn7bz\") pod \"downloads-7954f5f757-7czb5\" (UID: \"c30fd97b-0555-479c-969f-4148e7bfb66d\") " pod="openshift-console/downloads-7954f5f757-7czb5" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535041 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535082 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkj9z\" (UniqueName: \"kubernetes.io/projected/a5317443-5085-4cd9-b3fb-6b8282746932-kube-api-access-hkj9z\") pod \"router-default-5444994796-rfsmt\" (UID: \"a5317443-5085-4cd9-b3fb-6b8282746932\") " pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535105 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-config\") pod \"controller-manager-879f6c89f-h4w4g\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535139 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1467a368-ffe2-4fd5-abca-e42018890e40-images\") pod \"machine-api-operator-5694c8668f-hqzgc\" (UID: \"1467a368-ffe2-4fd5-abca-e42018890e40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535158 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0e7822fc-7419-4806-907b-b442a62f4baf-etcd-serving-ca\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535175 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-audit-policies\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535214 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535235 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fhlm\" (UniqueName: 
\"kubernetes.io/projected/3a00e317-0d77-42ec-b31c-916797497da3-kube-api-access-9fhlm\") pod \"openshift-config-operator-7777fb866f-pg499\" (UID: \"3a00e317-0d77-42ec-b31c-916797497da3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535317 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0e7822fc-7419-4806-907b-b442a62f4baf-audit\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535343 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/02af14b8-f5ac-4ce9-a001-8389192957e1-encryption-config\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535363 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmsfp\" (UniqueName: \"kubernetes.io/projected/9f220961-3dce-41b1-8ec4-26dece52318b-kube-api-access-dmsfp\") pod \"cluster-image-registry-operator-dc59b4c8b-lfjc8\" (UID: \"9f220961-3dce-41b1-8ec4-26dece52318b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535381 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69dbf7a5-1a4e-403b-8830-4e4b49305af5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hwqmb\" (UID: \"69dbf7a5-1a4e-403b-8830-4e4b49305af5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb" Jan 24 07:43:31 crc kubenswrapper[4705]: 
I0124 07:43:31.535398 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0e7822fc-7419-4806-907b-b442a62f4baf-image-import-ca\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535415 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/23a30e75-f6c5-417a-8931-2467fa9615a8-etcd-ca\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535431 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5317443-5085-4cd9-b3fb-6b8282746932-service-ca-bundle\") pod \"router-default-5444994796-rfsmt\" (UID: \"a5317443-5085-4cd9-b3fb-6b8282746932\") " pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535447 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a5317443-5085-4cd9-b3fb-6b8282746932-stats-auth\") pod \"router-default-5444994796-rfsmt\" (UID: \"a5317443-5085-4cd9-b3fb-6b8282746932\") " pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535465 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4cj4\" (UniqueName: \"kubernetes.io/projected/02af14b8-f5ac-4ce9-a001-8389192957e1-kube-api-access-z4cj4\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535485 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1467a368-ffe2-4fd5-abca-e42018890e40-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hqzgc\" (UID: \"1467a368-ffe2-4fd5-abca-e42018890e40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535504 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e7822fc-7419-4806-907b-b442a62f4baf-audit-dir\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535521 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a00e317-0d77-42ec-b31c-916797497da3-serving-cert\") pod \"openshift-config-operator-7777fb866f-pg499\" (UID: \"3a00e317-0d77-42ec-b31c-916797497da3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535538 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cfbb848d-6faf-4397-94fa-49b2a319c091-auth-proxy-config\") pod \"machine-approver-56656f9798-v85qc\" (UID: \"cfbb848d-6faf-4397-94fa-49b2a319c091\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535553 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-config\") pod 
\"route-controller-manager-6576b87f9c-27hpm\" (UID: \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535571 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/02af14b8-f5ac-4ce9-a001-8389192957e1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535586 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/02af14b8-f5ac-4ce9-a001-8389192957e1-audit-dir\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535607 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535626 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lsqx\" (UniqueName: \"kubernetes.io/projected/69dbf7a5-1a4e-403b-8830-4e4b49305af5-kube-api-access-6lsqx\") pod \"openshift-apiserver-operator-796bbdcf4f-hwqmb\" (UID: \"69dbf7a5-1a4e-403b-8830-4e4b49305af5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535643 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w88cj\" (UniqueName: \"kubernetes.io/projected/23a30e75-f6c5-417a-8931-2467fa9615a8-kube-api-access-w88cj\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535672 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-h4w4g\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535690 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2r7l\" (UniqueName: \"kubernetes.io/projected/592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc-kube-api-access-k2r7l\") pod \"openshift-controller-manager-operator-756b6f6bc6-h2p8m\" (UID: \"592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535705 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535722 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e7822fc-7419-4806-907b-b442a62f4baf-serving-cert\") pod 
\"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535752 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02af14b8-f5ac-4ce9-a001-8389192957e1-serving-cert\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535769 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535783 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm45f\" (UniqueName: \"kubernetes.io/projected/90ec9237-0f8d-4641-8e07-7fb662297324-kube-api-access-zm45f\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535802 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x68nd\" (UniqueName: \"kubernetes.io/projected/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-kube-api-access-x68nd\") pod \"controller-manager-879f6c89f-h4w4g\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535829 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69dbf7a5-1a4e-403b-8830-4e4b49305af5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hwqmb\" (UID: \"69dbf7a5-1a4e-403b-8830-4e4b49305af5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535895 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5t26\" (UniqueName: \"kubernetes.io/projected/0e7822fc-7419-4806-907b-b442a62f4baf-kube-api-access-q5t26\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535914 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-serving-cert\") pod \"controller-manager-879f6c89f-h4w4g\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535931 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/23a30e75-f6c5-417a-8931-2467fa9615a8-etcd-service-ca\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.535947 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23a30e75-f6c5-417a-8931-2467fa9615a8-etcd-client\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: 
I0124 07:43:31.535967 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-client-ca\") pod \"route-controller-manager-6576b87f9c-27hpm\" (UID: \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536003 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536025 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crfnf\" (UniqueName: \"kubernetes.io/projected/a144ed50-2315-4874-a8f3-2f1f39111666-kube-api-access-crfnf\") pod \"dns-operator-744455d44c-gjggq\" (UID: \"a144ed50-2315-4874-a8f3-2f1f39111666\") " pod="openshift-dns-operator/dns-operator-744455d44c-gjggq" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536041 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536064 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9f220961-3dce-41b1-8ec4-26dece52318b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lfjc8\" (UID: \"9f220961-3dce-41b1-8ec4-26dece52318b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536081 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/02af14b8-f5ac-4ce9-a001-8389192957e1-audit-policies\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536097 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02af14b8-f5ac-4ce9-a001-8389192957e1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536114 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f220961-3dce-41b1-8ec4-26dece52318b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lfjc8\" (UID: \"9f220961-3dce-41b1-8ec4-26dece52318b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536135 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a144ed50-2315-4874-a8f3-2f1f39111666-metrics-tls\") pod \"dns-operator-744455d44c-gjggq\" (UID: \"a144ed50-2315-4874-a8f3-2f1f39111666\") " pod="openshift-dns-operator/dns-operator-744455d44c-gjggq" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 
07:43:31.536151 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536355 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0e7822fc-7419-4806-907b-b442a62f4baf-node-pullsecrets\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536374 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0e7822fc-7419-4806-907b-b442a62f4baf-etcd-client\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536393 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6e67495-3fae-45d0-a5d5-4741c0a763c9-config\") pod \"kube-controller-manager-operator-78b949d7b-k5v88\" (UID: \"b6e67495-3fae-45d0-a5d5-4741c0a763c9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536412 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjj5t\" (UniqueName: \"kubernetes.io/projected/1467a368-ffe2-4fd5-abca-e42018890e40-kube-api-access-bjj5t\") pod \"machine-api-operator-5694c8668f-hqzgc\" (UID: 
\"1467a368-ffe2-4fd5-abca-e42018890e40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536430 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e7822fc-7419-4806-907b-b442a62f4baf-config\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536447 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536464 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23a30e75-f6c5-417a-8931-2467fa9615a8-serving-cert\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536486 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-serving-cert\") pod \"route-controller-manager-6576b87f9c-27hpm\" (UID: \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536505 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkwws\" (UniqueName: 
\"kubernetes.io/projected/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-kube-api-access-zkwws\") pod \"route-controller-manager-6576b87f9c-27hpm\" (UID: \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536521 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90ec9237-0f8d-4641-8e07-7fb662297324-audit-dir\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536541 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvn6g\" (UniqueName: \"kubernetes.io/projected/cfbb848d-6faf-4397-94fa-49b2a319c091-kube-api-access-vvn6g\") pod \"machine-approver-56656f9798-v85qc\" (UID: \"cfbb848d-6faf-4397-94fa-49b2a319c091\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536557 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a30e75-f6c5-417a-8931-2467fa9615a8-config\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536574 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/02af14b8-f5ac-4ce9-a001-8389192957e1-etcd-client\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536591 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cfbb848d-6faf-4397-94fa-49b2a319c091-machine-approver-tls\") pod \"machine-approver-56656f9798-v85qc\" (UID: \"cfbb848d-6faf-4397-94fa-49b2a319c091\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536608 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a5317443-5085-4cd9-b3fb-6b8282746932-default-certificate\") pod \"router-default-5444994796-rfsmt\" (UID: \"a5317443-5085-4cd9-b3fb-6b8282746932\") " pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.536629 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.537168 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfbb848d-6faf-4397-94fa-49b2a319c091-config\") pod \"machine-approver-56656f9798-v85qc\" (UID: \"cfbb848d-6faf-4397-94fa-49b2a319c091\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.537788 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-client-ca\") pod \"controller-manager-879f6c89f-h4w4g\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.538852 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0e7822fc-7419-4806-907b-b442a62f4baf-node-pullsecrets\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.539322 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-config\") pod \"controller-manager-879f6c89f-h4w4g\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.539753 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0e7822fc-7419-4806-907b-b442a62f4baf-etcd-serving-ca\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.539958 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-client-ca\") pod \"route-controller-manager-6576b87f9c-27hpm\" (UID: \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.540078 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0e7822fc-7419-4806-907b-b442a62f4baf-audit\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " 
pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.540086 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1467a368-ffe2-4fd5-abca-e42018890e40-config\") pod \"machine-api-operator-5694c8668f-hqzgc\" (UID: \"1467a368-ffe2-4fd5-abca-e42018890e40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.540402 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0e7822fc-7419-4806-907b-b442a62f4baf-image-import-ca\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.540679 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e7822fc-7419-4806-907b-b442a62f4baf-audit-dir\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.541507 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69dbf7a5-1a4e-403b-8830-4e4b49305af5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hwqmb\" (UID: \"69dbf7a5-1a4e-403b-8830-4e4b49305af5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.541607 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e7822fc-7419-4806-907b-b442a62f4baf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " 
pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.541736 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/02af14b8-f5ac-4ce9-a001-8389192957e1-audit-policies\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.541862 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e7822fc-7419-4806-907b-b442a62f4baf-config\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.544635 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1467a368-ffe2-4fd5-abca-e42018890e40-images\") pod \"machine-api-operator-5694c8668f-hqzgc\" (UID: \"1467a368-ffe2-4fd5-abca-e42018890e40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.545999 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1467a368-ffe2-4fd5-abca-e42018890e40-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hqzgc\" (UID: \"1467a368-ffe2-4fd5-abca-e42018890e40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.546068 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/02af14b8-f5ac-4ce9-a001-8389192957e1-audit-dir\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.546225 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/02af14b8-f5ac-4ce9-a001-8389192957e1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.546581 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cfbb848d-6faf-4397-94fa-49b2a319c091-auth-proxy-config\") pod \"machine-approver-56656f9798-v85qc\" (UID: \"cfbb848d-6faf-4397-94fa-49b2a319c091\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.546947 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a00e317-0d77-42ec-b31c-916797497da3-serving-cert\") pod \"openshift-config-operator-7777fb866f-pg499\" (UID: \"3a00e317-0d77-42ec-b31c-916797497da3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.546995 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0e7822fc-7419-4806-907b-b442a62f4baf-etcd-client\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.547398 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02af14b8-f5ac-4ce9-a001-8389192957e1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: 
\"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.547940 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-h4w4g\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.549300 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0e7822fc-7419-4806-907b-b442a62f4baf-encryption-config\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.549949 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-config\") pod \"route-controller-manager-6576b87f9c-27hpm\" (UID: \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.550415 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.553346 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.554062 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-serving-cert\") pod \"controller-manager-879f6c89f-h4w4g\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.554204 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.554286 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.554291 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.554481 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.554571 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.554625 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.554671 4705 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.554759 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.554867 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.555337 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.555445 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.556195 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.557028 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.557054 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-m5twk"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.557657 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.558187 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-m5twk" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.558426 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.560293 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-58kmq"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.560782 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-58kmq" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.565852 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.566935 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/02af14b8-f5ac-4ce9-a001-8389192957e1-etcd-client\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.567144 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/02af14b8-f5ac-4ce9-a001-8389192957e1-encryption-config\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.567657 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cfbb848d-6faf-4397-94fa-49b2a319c091-machine-approver-tls\") pod \"machine-approver-56656f9798-v85qc\" (UID: \"cfbb848d-6faf-4397-94fa-49b2a319c091\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.567853 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.568138 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pc548"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.568417 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.568426 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.570442 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.570951 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.572000 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.572541 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.572985 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.573283 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.573304 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bjvr8"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.573778 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" Jan 24 07:43:31 crc kubenswrapper[4705]: W0124 07:43:31.573929 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-57bd8596897fbe08e1c5ddaff367ea37a8b3ee46606790da8c64e9234707f010 WatchSource:0}: Error finding container 57bd8596897fbe08e1c5ddaff367ea37a8b3ee46606790da8c64e9234707f010: Status 404 returned error can't find the container with id 57bd8596897fbe08e1c5ddaff367ea37a8b3ee46606790da8c64e9234707f010 Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.574964 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.575280 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.575721 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bjvr8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.576699 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.583453 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69dbf7a5-1a4e-403b-8830-4e4b49305af5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hwqmb\" (UID: \"69dbf7a5-1a4e-403b-8830-4e4b49305af5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.583762 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02af14b8-f5ac-4ce9-a001-8389192957e1-serving-cert\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.584738 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.585072 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e7822fc-7419-4806-907b-b442a62f4baf-serving-cert\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.586020 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-serving-cert\") pod \"route-controller-manager-6576b87f9c-27hpm\" (UID: \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.590748 4705 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.592051 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.592191 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a144ed50-2315-4874-a8f3-2f1f39111666-metrics-tls\") pod \"dns-operator-744455d44c-gjggq\" (UID: \"a144ed50-2315-4874-a8f3-2f1f39111666\") " pod="openshift-dns-operator/dns-operator-744455d44c-gjggq" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.595258 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.597083 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.597341 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.598224 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2dg4s"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.598352 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.599403 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gbdm8"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.599529 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2dg4s" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.602300 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zfr4t"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.602491 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.604544 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.604612 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zfr4t" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.604782 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7czb5"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.606433 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mz97d"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.607709 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rnmwt"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.609317 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.610551 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.611213 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.611678 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h4w4g"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.612773 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-n7xmf"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.613786 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h8gjk"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.614785 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.615907 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.616943 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.618011 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.619689 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-m7gzc"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.620993 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.621975 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d54hp"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.622977 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-t774k"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.623971 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-m5twk"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.626213 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pc548"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.627229 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.628309 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-bnwdd"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.629138 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bnwdd" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.629999 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-kqb54"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.630556 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.630810 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kqb54" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.631398 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.632534 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.633833 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2dg4s"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.635004 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.636073 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm"] Jan 24 07:43:31 crc kubenswrapper[4705]: 
I0124 07:43:31.637134 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-58kmq"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.637786 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkj9z\" (UniqueName: \"kubernetes.io/projected/a5317443-5085-4cd9-b3fb-6b8282746932-kube-api-access-hkj9z\") pod \"router-default-5444994796-rfsmt\" (UID: \"a5317443-5085-4cd9-b3fb-6b8282746932\") " pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.637866 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-audit-policies\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.637901 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.637925 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/23a30e75-f6c5-417a-8931-2467fa9615a8-etcd-ca\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.637968 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a5317443-5085-4cd9-b3fb-6b8282746932-service-ca-bundle\") pod \"router-default-5444994796-rfsmt\" (UID: \"a5317443-5085-4cd9-b3fb-6b8282746932\") " pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.637996 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a5317443-5085-4cd9-b3fb-6b8282746932-stats-auth\") pod \"router-default-5444994796-rfsmt\" (UID: \"a5317443-5085-4cd9-b3fb-6b8282746932\") " pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.638079 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmsfp\" (UniqueName: \"kubernetes.io/projected/9f220961-3dce-41b1-8ec4-26dece52318b-kube-api-access-dmsfp\") pod \"cluster-image-registry-operator-dc59b4c8b-lfjc8\" (UID: \"9f220961-3dce-41b1-8ec4-26dece52318b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.638118 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.638461 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w88cj\" (UniqueName: \"kubernetes.io/projected/23a30e75-f6c5-417a-8931-2467fa9615a8-kube-api-access-w88cj\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 
07:43:31.638502 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2r7l\" (UniqueName: \"kubernetes.io/projected/592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc-kube-api-access-k2r7l\") pod \"openshift-controller-manager-operator-756b6f6bc6-h2p8m\" (UID: \"592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.638526 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.638557 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.638580 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm45f\" (UniqueName: \"kubernetes.io/projected/90ec9237-0f8d-4641-8e07-7fb662297324-kube-api-access-zm45f\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.638611 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bjvr8"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.638625 
4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/23a30e75-f6c5-417a-8931-2467fa9615a8-etcd-service-ca\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.638646 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23a30e75-f6c5-417a-8931-2467fa9615a8-etcd-client\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.638694 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.638725 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.638748 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f220961-3dce-41b1-8ec4-26dece52318b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lfjc8\" (UID: \"9f220961-3dce-41b1-8ec4-26dece52318b\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.638775 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f220961-3dce-41b1-8ec4-26dece52318b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lfjc8\" (UID: \"9f220961-3dce-41b1-8ec4-26dece52318b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.638796 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.639687 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6e67495-3fae-45d0-a5d5-4741c0a763c9-config\") pod \"kube-controller-manager-operator-78b949d7b-k5v88\" (UID: \"b6e67495-3fae-45d0-a5d5-4741c0a763c9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.639724 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.639755 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23a30e75-f6c5-417a-8931-2467fa9615a8-serving-cert\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.639788 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a30e75-f6c5-417a-8931-2467fa9615a8-config\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.639809 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90ec9237-0f8d-4641-8e07-7fb662297324-audit-dir\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.639924 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.639948 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a5317443-5085-4cd9-b3fb-6b8282746932-default-certificate\") pod \"router-default-5444994796-rfsmt\" (UID: \"a5317443-5085-4cd9-b3fb-6b8282746932\") " pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.639976 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6e67495-3fae-45d0-a5d5-4741c0a763c9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-k5v88\" (UID: \"b6e67495-3fae-45d0-a5d5-4741c0a763c9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.639998 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.640023 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f220961-3dce-41b1-8ec4-26dece52318b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lfjc8\" (UID: \"9f220961-3dce-41b1-8ec4-26dece52318b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.640046 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-h2p8m\" (UID: \"592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.640065 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5317443-5085-4cd9-b3fb-6b8282746932-metrics-certs\") pod 
\"router-default-5444994796-rfsmt\" (UID: \"a5317443-5085-4cd9-b3fb-6b8282746932\") " pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.640092 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-h2p8m\" (UID: \"592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.640112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6e67495-3fae-45d0-a5d5-4741c0a763c9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-k5v88\" (UID: \"b6e67495-3fae-45d0-a5d5-4741c0a763c9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.640133 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn7bz\" (UniqueName: \"kubernetes.io/projected/c30fd97b-0555-479c-969f-4148e7bfb66d-kube-api-access-gn7bz\") pod \"downloads-7954f5f757-7czb5\" (UID: \"c30fd97b-0555-479c-969f-4148e7bfb66d\") " pod="openshift-console/downloads-7954f5f757-7czb5" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.640152 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.640157 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.640322 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5317443-5085-4cd9-b3fb-6b8282746932-service-ca-bundle\") pod \"router-default-5444994796-rfsmt\" (UID: \"a5317443-5085-4cd9-b3fb-6b8282746932\") " pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.640695 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.641615 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.639582 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-audit-policies\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.641788 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.642324 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.642419 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6e67495-3fae-45d0-a5d5-4741c0a763c9-config\") pod \"kube-controller-manager-operator-78b949d7b-k5v88\" (UID: \"b6e67495-3fae-45d0-a5d5-4741c0a763c9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.642703 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f220961-3dce-41b1-8ec4-26dece52318b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lfjc8\" (UID: \"9f220961-3dce-41b1-8ec4-26dece52318b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.643540 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 
07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.643654 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.643796 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90ec9237-0f8d-4641-8e07-7fb662297324-audit-dir\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.644274 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-h2p8m\" (UID: \"592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.644649 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gbdm8"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.645000 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-h2p8m\" (UID: \"592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.646158 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.646352 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.646617 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.647776 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.648957 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a5317443-5085-4cd9-b3fb-6b8282746932-default-certificate\") pod \"router-default-5444994796-rfsmt\" (UID: 
\"a5317443-5085-4cd9-b3fb-6b8282746932\") " pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.649333 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.649963 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.650876 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.650905 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6e67495-3fae-45d0-a5d5-4741c0a763c9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-k5v88\" (UID: \"b6e67495-3fae-45d0-a5d5-4741c0a763c9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.650984 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7cvlf"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.650994 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5317443-5085-4cd9-b3fb-6b8282746932-metrics-certs\") pod \"router-default-5444994796-rfsmt\" (UID: \"a5317443-5085-4cd9-b3fb-6b8282746932\") " pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:31 crc 
kubenswrapper[4705]: I0124 07:43:31.652539 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bnwdd"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.654309 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a5317443-5085-4cd9-b3fb-6b8282746932-stats-auth\") pod \"router-default-5444994796-rfsmt\" (UID: \"a5317443-5085-4cd9-b3fb-6b8282746932\") " pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.656796 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zfr4t"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.658554 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kqb54"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.659720 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-s85hw"] Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.661439 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-s85hw" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.670790 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.690312 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.710262 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.714477 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f220961-3dce-41b1-8ec4-26dece52318b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lfjc8\" (UID: \"9f220961-3dce-41b1-8ec4-26dece52318b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.730535 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.752575 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.779299 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.790463 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.798239 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23a30e75-f6c5-417a-8931-2467fa9615a8-serving-cert\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.810796 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.822870 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23a30e75-f6c5-417a-8931-2467fa9615a8-etcd-client\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.829865 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.834095 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a30e75-f6c5-417a-8931-2467fa9615a8-config\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.850377 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.859256 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/23a30e75-f6c5-417a-8931-2467fa9615a8-etcd-ca\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc 
kubenswrapper[4705]: I0124 07:43:31.870660 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.879554 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/23a30e75-f6c5-417a-8931-2467fa9615a8-etcd-service-ca\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.891138 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.930498 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.950718 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.970675 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.982254 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"97388f9494c2d48e4a3c0856b69141b57dd1220ddee798997b8233212fc42d5c"} Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.982320 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"57bd8596897fbe08e1c5ddaff367ea37a8b3ee46606790da8c64e9234707f010"} Jan 24 07:43:31 crc 
kubenswrapper[4705]: I0124 07:43:31.982497 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.983643 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"343c8c2f95a52df58cb652b371038bded5cf7ed0b84972ba13dde40ea96a9a46"} Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.983671 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f8638e20d8078091d5507fd291e08bb3732773dfc2c014d3e08f8860fa0b3553"} Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.984840 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e04f15dcdf058d5c820d3eb3edfe957ccf944beb30d8fce5e340d28bce621079"} Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.984888 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"24d71d0d4bd1d44f8adc8c069c31869dca57ef5d6b923ae94d2ef2276f798a97"} Jan 24 07:43:31 crc kubenswrapper[4705]: I0124 07:43:31.996584 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.011000 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.030300 4705 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.050904 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.070573 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.091044 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.110908 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.144017 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x68nd\" (UniqueName: \"kubernetes.io/projected/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-kube-api-access-x68nd\") pod \"controller-manager-879f6c89f-h4w4g\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.164867 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fhlm\" (UniqueName: \"kubernetes.io/projected/3a00e317-0d77-42ec-b31c-916797497da3-kube-api-access-9fhlm\") pod \"openshift-config-operator-7777fb866f-pg499\" (UID: \"3a00e317-0d77-42ec-b31c-916797497da3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.184901 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5t26\" 
(UniqueName: \"kubernetes.io/projected/0e7822fc-7419-4806-907b-b442a62f4baf-kube-api-access-q5t26\") pod \"apiserver-76f77b778f-bvnn7\" (UID: \"0e7822fc-7419-4806-907b-b442a62f4baf\") " pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.208266 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crfnf\" (UniqueName: \"kubernetes.io/projected/a144ed50-2315-4874-a8f3-2f1f39111666-kube-api-access-crfnf\") pod \"dns-operator-744455d44c-gjggq\" (UID: \"a144ed50-2315-4874-a8f3-2f1f39111666\") " pod="openshift-dns-operator/dns-operator-744455d44c-gjggq" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.244517 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjj5t\" (UniqueName: \"kubernetes.io/projected/1467a368-ffe2-4fd5-abca-e42018890e40-kube-api-access-bjj5t\") pod \"machine-api-operator-5694c8668f-hqzgc\" (UID: \"1467a368-ffe2-4fd5-abca-e42018890e40\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.270628 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.295700 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4cj4\" (UniqueName: \"kubernetes.io/projected/02af14b8-f5ac-4ce9-a001-8389192957e1-kube-api-access-z4cj4\") pod \"apiserver-7bbb656c7d-c8vf8\" (UID: \"02af14b8-f5ac-4ce9-a001-8389192957e1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.295933 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkwws\" (UniqueName: \"kubernetes.io/projected/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-kube-api-access-zkwws\") pod 
\"route-controller-manager-6576b87f9c-27hpm\" (UID: \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.304464 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.309478 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lsqx\" (UniqueName: \"kubernetes.io/projected/69dbf7a5-1a4e-403b-8830-4e4b49305af5-kube-api-access-6lsqx\") pod \"openshift-apiserver-operator-796bbdcf4f-hwqmb\" (UID: \"69dbf7a5-1a4e-403b-8830-4e4b49305af5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.328131 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvn6g\" (UniqueName: \"kubernetes.io/projected/cfbb848d-6faf-4397-94fa-49b2a319c091-kube-api-access-vvn6g\") pod \"machine-approver-56656f9798-v85qc\" (UID: \"cfbb848d-6faf-4397-94fa-49b2a319c091\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.329963 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.331238 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.346811 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.425525 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gjggq" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.425682 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.426172 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.428681 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.429740 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.437379 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.448158 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 24 07:43:32 crc kubenswrapper[4705]: W0124 07:43:32.450503 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfbb848d_6faf_4397_94fa_49b2a319c091.slice/crio-675f5a964feed81e0677bb4c6d61a35ccd528ae686e4f37540076b7be85e665d WatchSource:0}: Error finding container 675f5a964feed81e0677bb4c6d61a35ccd528ae686e4f37540076b7be85e665d: Status 404 returned error can't find the container with id 675f5a964feed81e0677bb4c6d61a35ccd528ae686e4f37540076b7be85e665d Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.450991 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.451845 4705 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.506116 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.506638 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.510479 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.531999 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.550891 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.565443 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.569257 4705 request.go:700] Waited for 1.011828712s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.572187 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.633491 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.634066 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.635240 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.635479 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.695310 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.695676 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.695733 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.710607 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.740589 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.759285 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.778919 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.856848 4705 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.857417 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.858458 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.863475 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.870629 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.890989 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.911629 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.931518 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.951918 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 24 07:43:32 crc kubenswrapper[4705]: I0124 07:43:32.970535 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.101136 4705 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.101502 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.101891 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.102045 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.102191 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.204491 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.204811 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.205059 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.205180 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.205242 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.208961 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.214756 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" event={"ID":"cfbb848d-6faf-4397-94fa-49b2a319c091","Type":"ContainerStarted","Data":"675f5a964feed81e0677bb4c6d61a35ccd528ae686e4f37540076b7be85e665d"} Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.214961 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.351922 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.352228 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.352369 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.352836 4705 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.354554 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.357566 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.363066 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.377130 4705 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.484202 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.484438 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.484531 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.487217 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.488005 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.504787 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.512064 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.532524 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.551038 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.570673 4705 request.go:700] Waited for 1.932677906s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/serviceaccounts/router/token Jan 24 
07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.572062 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hqzgc"] Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.601559 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkj9z\" (UniqueName: \"kubernetes.io/projected/a5317443-5085-4cd9-b3fb-6b8282746932-kube-api-access-hkj9z\") pod \"router-default-5444994796-rfsmt\" (UID: \"a5317443-5085-4cd9-b3fb-6b8282746932\") " pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.606211 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmsfp\" (UniqueName: \"kubernetes.io/projected/9f220961-3dce-41b1-8ec4-26dece52318b-kube-api-access-dmsfp\") pod \"cluster-image-registry-operator-dc59b4c8b-lfjc8\" (UID: \"9f220961-3dce-41b1-8ec4-26dece52318b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.628750 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w88cj\" (UniqueName: \"kubernetes.io/projected/23a30e75-f6c5-417a-8931-2467fa9615a8-kube-api-access-w88cj\") pod \"etcd-operator-b45778765-d54hp\" (UID: \"23a30e75-f6c5-417a-8931-2467fa9615a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.646624 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm45f\" (UniqueName: \"kubernetes.io/projected/90ec9237-0f8d-4641-8e07-7fb662297324-kube-api-access-zm45f\") pod \"oauth-openshift-558db77b4-rnmwt\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.649123 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.672011 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2r7l\" (UniqueName: \"kubernetes.io/projected/592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc-kube-api-access-k2r7l\") pod \"openshift-controller-manager-operator-756b6f6bc6-h2p8m\" (UID: \"592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.692021 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6e67495-3fae-45d0-a5d5-4741c0a763c9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-k5v88\" (UID: \"b6e67495-3fae-45d0-a5d5-4741c0a763c9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.707099 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f220961-3dce-41b1-8ec4-26dece52318b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lfjc8\" (UID: \"9f220961-3dce-41b1-8ec4-26dece52318b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.790228 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.790459 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.790734 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.791446 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.806157 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn7bz\" (UniqueName: \"kubernetes.io/projected/c30fd97b-0555-479c-969f-4148e7bfb66d-kube-api-access-gn7bz\") pod \"downloads-7954f5f757-7czb5\" (UID: \"c30fd97b-0555-479c-969f-4148e7bfb66d\") " pod="openshift-console/downloads-7954f5f757-7czb5" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.827356 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h4w4g"] Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866427 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgpnn\" (UniqueName: \"kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-kube-api-access-zgpnn\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866476 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/544a6521-1632-42e7-aedc-c26453958c18-serving-cert\") pod \"authentication-operator-69f744f599-mz97d\" (UID: \"544a6521-1632-42e7-aedc-c26453958c18\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866515 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/e4e30be1-989b-4a5d-a33c-79c00184ce75-registry-certificates\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866554 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/544a6521-1632-42e7-aedc-c26453958c18-service-ca-bundle\") pod \"authentication-operator-69f744f599-mz97d\" (UID: \"544a6521-1632-42e7-aedc-c26453958c18\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866575 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e4e30be1-989b-4a5d-a33c-79c00184ce75-installation-pull-secrets\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866592 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84a6e017-145f-4005-85b2-3f027185ed6c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4c8z4\" (UID: \"84a6e017-145f-4005-85b2-3f027185ed6c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866608 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4e30be1-989b-4a5d-a33c-79c00184ce75-trusted-ca\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866626 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq8fr\" (UniqueName: \"kubernetes.io/projected/6887bb61-9f22-4386-b263-866334b6529e-kube-api-access-hq8fr\") pod \"machine-config-controller-84d6567774-t774k\" (UID: \"6887bb61-9f22-4386-b263-866334b6529e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t774k" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866651 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6887bb61-9f22-4386-b263-866334b6529e-proxy-tls\") pod \"machine-config-controller-84d6567774-t774k\" (UID: \"6887bb61-9f22-4386-b263-866334b6529e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t774k" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866671 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df8e8ead-4e67-4775-bce3-b48236e30573-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2s5cd\" (UID: \"df8e8ead-4e67-4775-bce3-b48236e30573\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866687 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/544a6521-1632-42e7-aedc-c26453958c18-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mz97d\" (UID: \"544a6521-1632-42e7-aedc-c26453958c18\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866701 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-bound-sa-token\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866714 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df8e8ead-4e67-4775-bce3-b48236e30573-config\") pod \"kube-apiserver-operator-766d6c64bb-2s5cd\" (UID: \"df8e8ead-4e67-4775-bce3-b48236e30573\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866732 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6887bb61-9f22-4386-b263-866334b6529e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-t774k\" (UID: \"6887bb61-9f22-4386-b263-866334b6529e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t774k" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866765 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e4e30be1-989b-4a5d-a33c-79c00184ce75-ca-trust-extracted\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866791 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkfkr\" (UniqueName: \"kubernetes.io/projected/544a6521-1632-42e7-aedc-c26453958c18-kube-api-access-fkfkr\") pod 
\"authentication-operator-69f744f599-mz97d\" (UID: \"544a6521-1632-42e7-aedc-c26453958c18\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866806 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df8e8ead-4e67-4775-bce3-b48236e30573-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2s5cd\" (UID: \"df8e8ead-4e67-4775-bce3-b48236e30573\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866836 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84a6e017-145f-4005-85b2-3f027185ed6c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4c8z4\" (UID: \"84a6e017-145f-4005-85b2-3f027185ed6c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866861 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/544a6521-1632-42e7-aedc-c26453958c18-config\") pod \"authentication-operator-69f744f599-mz97d\" (UID: \"544a6521-1632-42e7-aedc-c26453958c18\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866878 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84a6e017-145f-4005-85b2-3f027185ed6c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4c8z4\" (UID: \"84a6e017-145f-4005-85b2-3f027185ed6c\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866906 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-registry-tls\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.866929 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:33 crc kubenswrapper[4705]: E0124 07:43:33.868658 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:34.368641252 +0000 UTC m=+153.088514650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.876845 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb"] Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.882653 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.888660 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pg499"] Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.891790 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.897519 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.926948 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8"] Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.942808 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.968600 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.968755 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec9c8003-615a-49f5-b23a-3fad6ba93ffd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b854t\" (UID: \"ec9c8003-615a-49f5-b23a-3fad6ba93ffd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.968800 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37dfc0ce-5c63-4de8-8b92-08626d0ef9c6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pk2hm\" (UID: \"37dfc0ce-5c63-4de8-8b92-08626d0ef9c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.968850 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df8e8ead-4e67-4775-bce3-b48236e30573-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2s5cd\" (UID: \"df8e8ead-4e67-4775-bce3-b48236e30573\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.968885 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjxn2\" (UniqueName: \"kubernetes.io/projected/37dfc0ce-5c63-4de8-8b92-08626d0ef9c6-kube-api-access-cjxn2\") pod \"machine-config-operator-74547568cd-pk2hm\" (UID: \"37dfc0ce-5c63-4de8-8b92-08626d0ef9c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.968906 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkfkr\" (UniqueName: \"kubernetes.io/projected/544a6521-1632-42e7-aedc-c26453958c18-kube-api-access-fkfkr\") pod \"authentication-operator-69f744f599-mz97d\" (UID: \"544a6521-1632-42e7-aedc-c26453958c18\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.968935 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/544a6521-1632-42e7-aedc-c26453958c18-config\") pod \"authentication-operator-69f744f599-mz97d\" (UID: \"544a6521-1632-42e7-aedc-c26453958c18\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.968959 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84a6e017-145f-4005-85b2-3f027185ed6c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4c8z4\" (UID: \"84a6e017-145f-4005-85b2-3f027185ed6c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.968979 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37dfc0ce-5c63-4de8-8b92-08626d0ef9c6-proxy-tls\") pod 
\"machine-config-operator-74547568cd-pk2hm\" (UID: \"37dfc0ce-5c63-4de8-8b92-08626d0ef9c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969011 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-socket-dir\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969034 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-registry-tls\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969081 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdgzd\" (UniqueName: \"kubernetes.io/projected/a84e98fc-8911-4fe1-8242-e906ccfdb277-kube-api-access-wdgzd\") pod \"control-plane-machine-set-operator-78cbb6b69f-7cvlf\" (UID: \"a84e98fc-8911-4fe1-8242-e906ccfdb277\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7cvlf" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969138 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4tj\" (UniqueName: \"kubernetes.io/projected/ab19a35e-a9f2-44e8-9e8c-40339f0a9195-kube-api-access-lc4tj\") pod \"ingress-canary-kqb54\" (UID: \"ab19a35e-a9f2-44e8-9e8c-40339f0a9195\") " pod="openshift-ingress-canary/ingress-canary-kqb54" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969176 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5ea77a-4a2f-4be0-81d3-9ecf84cfbe2c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xc9ml\" (UID: \"7a5ea77a-4a2f-4be0-81d3-9ecf84cfbe2c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969199 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrgql\" (UniqueName: \"kubernetes.io/projected/c573a3c6-adea-4c48-a71b-4644822c9caa-kube-api-access-mrgql\") pod \"service-ca-9c57cc56f-58kmq\" (UID: \"c573a3c6-adea-4c48-a71b-4644822c9caa\") " pod="openshift-service-ca/service-ca-9c57cc56f-58kmq" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969233 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c573a3c6-adea-4c48-a71b-4644822c9caa-signing-key\") pod \"service-ca-9c57cc56f-58kmq\" (UID: \"c573a3c6-adea-4c48-a71b-4644822c9caa\") " pod="openshift-service-ca/service-ca-9c57cc56f-58kmq" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969268 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/544a6521-1632-42e7-aedc-c26453958c18-serving-cert\") pod \"authentication-operator-69f744f599-mz97d\" (UID: \"544a6521-1632-42e7-aedc-c26453958c18\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969327 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e4e30be1-989b-4a5d-a33c-79c00184ce75-registry-certificates\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: 
\"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969349 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec9c8003-615a-49f5-b23a-3fad6ba93ffd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-b854t\" (UID: \"ec9c8003-615a-49f5-b23a-3fad6ba93ffd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969371 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/03eae766-055e-4339-a21d-f594802d636c-profile-collector-cert\") pod \"catalog-operator-68c6474976-npxg4\" (UID: \"03eae766-055e-4339-a21d-f594802d636c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969421 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mbg8\" (UniqueName: \"kubernetes.io/projected/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-kube-api-access-7mbg8\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969442 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da4379ba-31cd-436c-b1a6-8e715c0d2dca-secret-volume\") pod \"collect-profiles-29487330-7xnrl\" (UID: \"da4379ba-31cd-436c-b1a6-8e715c0d2dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969477 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcnhk\" (UniqueName: \"kubernetes.io/projected/3bb788e4-fad9-4416-9042-7a46d8ef83b3-kube-api-access-qcnhk\") pod \"marketplace-operator-79b997595-h8gjk\" (UID: \"3bb788e4-fad9-4416-9042-7a46d8ef83b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969513 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq8fr\" (UniqueName: \"kubernetes.io/projected/6887bb61-9f22-4386-b263-866334b6529e-kube-api-access-hq8fr\") pod \"machine-config-controller-84d6567774-t774k\" (UID: \"6887bb61-9f22-4386-b263-866334b6529e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t774k" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969570 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b0f4e8b0-b8c3-4243-867e-2d63b524aebd-certs\") pod \"machine-config-server-s85hw\" (UID: \"b0f4e8b0-b8c3-4243-867e-2d63b524aebd\") " pod="openshift-machine-config-operator/machine-config-server-s85hw" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969635 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/100fdbf1-ca31-4a06-9f27-c3be6e08e887-config\") pod \"console-operator-58897d9998-zfr4t\" (UID: \"100fdbf1-ca31-4a06-9f27-c3be6e08e887\") " pod="openshift-console-operator/console-operator-58897d9998-zfr4t" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969658 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6990313f-0f9b-4a82-b072-4f094768a28e-trusted-ca\") pod \"ingress-operator-5b745b69d9-pc548\" (UID: 
\"6990313f-0f9b-4a82-b072-4f094768a28e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969686 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b0f4e8b0-b8c3-4243-867e-2d63b524aebd-node-bootstrap-token\") pod \"machine-config-server-s85hw\" (UID: \"b0f4e8b0-b8c3-4243-867e-2d63b524aebd\") " pod="openshift-machine-config-operator/machine-config-server-s85hw" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969708 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-csi-data-dir\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969730 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d77ct\" (UniqueName: \"kubernetes.io/projected/da4379ba-31cd-436c-b1a6-8e715c0d2dca-kube-api-access-d77ct\") pod \"collect-profiles-29487330-7xnrl\" (UID: \"da4379ba-31cd-436c-b1a6-8e715c0d2dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969753 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/544a6521-1632-42e7-aedc-c26453958c18-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mz97d\" (UID: \"544a6521-1632-42e7-aedc-c26453958c18\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969775 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37dfc0ce-5c63-4de8-8b92-08626d0ef9c6-images\") pod \"machine-config-operator-74547568cd-pk2hm\" (UID: \"37dfc0ce-5c63-4de8-8b92-08626d0ef9c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969798 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cfa9c40-ad55-4bb3-b7ba-4325816a760d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2dg4s\" (UID: \"4cfa9c40-ad55-4bb3-b7ba-4325816a760d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2dg4s" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969849 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-bound-sa-token\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969877 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6887bb61-9f22-4386-b263-866334b6529e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-t774k\" (UID: \"6887bb61-9f22-4386-b263-866334b6529e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t774k" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969901 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/100fdbf1-ca31-4a06-9f27-c3be6e08e887-serving-cert\") pod \"console-operator-58897d9998-zfr4t\" (UID: \"100fdbf1-ca31-4a06-9f27-c3be6e08e887\") " 
pod="openshift-console-operator/console-operator-58897d9998-zfr4t" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969920 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgrff\" (UniqueName: \"kubernetes.io/projected/03eae766-055e-4339-a21d-f594802d636c-kube-api-access-jgrff\") pod \"catalog-operator-68c6474976-npxg4\" (UID: \"03eae766-055e-4339-a21d-f594802d636c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969952 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/483649f3-68d0-467c-b4ff-dfbb2b3c340a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-b58b6\" (UID: \"483649f3-68d0-467c-b4ff-dfbb2b3c340a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.969985 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh49d\" (UniqueName: \"kubernetes.io/projected/f0639041-89a4-4d66-9670-e360fb45626d-kube-api-access-hh49d\") pod \"service-ca-operator-777779d784-5lpcd\" (UID: \"f0639041-89a4-4d66-9670-e360fb45626d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970008 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6cfdf35-7edc-48ac-b81b-45d7d57c7654-apiservice-cert\") pod \"packageserver-d55dfcdfc-rjxlz\" (UID: \"e6cfdf35-7edc-48ac-b81b-45d7d57c7654\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970076 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvh55\" (UniqueName: \"kubernetes.io/projected/7a5ea77a-4a2f-4be0-81d3-9ecf84cfbe2c-kube-api-access-qvh55\") pod \"package-server-manager-789f6589d5-xc9ml\" (UID: \"7a5ea77a-4a2f-4be0-81d3-9ecf84cfbe2c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970098 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49r8j\" (UniqueName: \"kubernetes.io/projected/100fdbf1-ca31-4a06-9f27-c3be6e08e887-kube-api-access-49r8j\") pod \"console-operator-58897d9998-zfr4t\" (UID: \"100fdbf1-ca31-4a06-9f27-c3be6e08e887\") " pod="openshift-console-operator/console-operator-58897d9998-zfr4t" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970118 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-oauth-serving-cert\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970139 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-plugins-dir\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970158 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3bb788e4-fad9-4416-9042-7a46d8ef83b3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h8gjk\" (UID: 
\"3bb788e4-fad9-4416-9042-7a46d8ef83b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970194 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84a6e017-145f-4005-85b2-3f027185ed6c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4c8z4\" (UID: \"84a6e017-145f-4005-85b2-3f027185ed6c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970214 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/483649f3-68d0-467c-b4ff-dfbb2b3c340a-srv-cert\") pod \"olm-operator-6b444d44fb-b58b6\" (UID: \"483649f3-68d0-467c-b4ff-dfbb2b3c340a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970265 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq78w\" (UniqueName: \"kubernetes.io/projected/6990313f-0f9b-4a82-b072-4f094768a28e-kube-api-access-cq78w\") pod \"ingress-operator-5b745b69d9-pc548\" (UID: \"6990313f-0f9b-4a82-b072-4f094768a28e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970290 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-mountpoint-dir\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970353 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dfb189df-fd59-4ce7-9cf9-56966dab7850-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-m5twk\" (UID: \"dfb189df-fd59-4ce7-9cf9-56966dab7850\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m5twk" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970373 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab19a35e-a9f2-44e8-9e8c-40339f0a9195-cert\") pod \"ingress-canary-kqb54\" (UID: \"ab19a35e-a9f2-44e8-9e8c-40339f0a9195\") " pod="openshift-ingress-canary/ingress-canary-kqb54" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970406 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0639041-89a4-4d66-9670-e360fb45626d-serving-cert\") pod \"service-ca-operator-777779d784-5lpcd\" (UID: \"f0639041-89a4-4d66-9670-e360fb45626d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970431 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgpnn\" (UniqueName: \"kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-kube-api-access-zgpnn\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970456 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/697d22ea-60a0-44b5-a5fb-17c7c9caaadb-metrics-tls\") pod \"dns-default-bnwdd\" (UID: \"697d22ea-60a0-44b5-a5fb-17c7c9caaadb\") " pod="openshift-dns/dns-default-bnwdd" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970478 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlzh6\" (UniqueName: \"kubernetes.io/projected/697d22ea-60a0-44b5-a5fb-17c7c9caaadb-kube-api-access-nlzh6\") pod \"dns-default-bnwdd\" (UID: \"697d22ea-60a0-44b5-a5fb-17c7c9caaadb\") " pod="openshift-dns/dns-default-bnwdd" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970498 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6990313f-0f9b-4a82-b072-4f094768a28e-metrics-tls\") pod \"ingress-operator-5b745b69d9-pc548\" (UID: \"6990313f-0f9b-4a82-b072-4f094768a28e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970519 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b94xh\" (UniqueName: \"kubernetes.io/projected/dfb189df-fd59-4ce7-9cf9-56966dab7850-kube-api-access-b94xh\") pod \"multus-admission-controller-857f4d67dd-m5twk\" (UID: \"dfb189df-fd59-4ce7-9cf9-56966dab7850\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m5twk" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970540 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr9g7\" (UniqueName: \"kubernetes.io/projected/b0f4e8b0-b8c3-4243-867e-2d63b524aebd-kube-api-access-pr9g7\") pod \"machine-config-server-s85hw\" (UID: \"b0f4e8b0-b8c3-4243-867e-2d63b524aebd\") " pod="openshift-machine-config-operator/machine-config-server-s85hw" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970568 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bb788e4-fad9-4416-9042-7a46d8ef83b3-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-h8gjk\" (UID: \"3bb788e4-fad9-4416-9042-7a46d8ef83b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970588 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv8hv\" (UniqueName: \"kubernetes.io/projected/4cfa9c40-ad55-4bb3-b7ba-4325816a760d-kube-api-access-tv8hv\") pod \"cluster-samples-operator-665b6dd947-2dg4s\" (UID: \"4cfa9c40-ad55-4bb3-b7ba-4325816a760d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2dg4s" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970644 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e6cfdf35-7edc-48ac-b81b-45d7d57c7654-tmpfs\") pod \"packageserver-d55dfcdfc-rjxlz\" (UID: \"e6cfdf35-7edc-48ac-b81b-45d7d57c7654\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970666 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c573a3c6-adea-4c48-a71b-4644822c9caa-signing-cabundle\") pod \"service-ca-9c57cc56f-58kmq\" (UID: \"c573a3c6-adea-4c48-a71b-4644822c9caa\") " pod="openshift-service-ca/service-ca-9c57cc56f-58kmq" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970703 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/544a6521-1632-42e7-aedc-c26453958c18-service-ca-bundle\") pod \"authentication-operator-69f744f599-mz97d\" (UID: \"544a6521-1632-42e7-aedc-c26453958c18\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970728 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgqxz\" (UniqueName: \"kubernetes.io/projected/ec9c8003-615a-49f5-b23a-3fad6ba93ffd-kube-api-access-dgqxz\") pod \"kube-storage-version-migrator-operator-b67b599dd-b854t\" (UID: \"ec9c8003-615a-49f5-b23a-3fad6ba93ffd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970748 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-registration-dir\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970804 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-serving-cert\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970846 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84a6e017-145f-4005-85b2-3f027185ed6c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4c8z4\" (UID: \"84a6e017-145f-4005-85b2-3f027185ed6c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970880 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e4e30be1-989b-4a5d-a33c-79c00184ce75-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970902 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4e30be1-989b-4a5d-a33c-79c00184ce75-trusted-ca\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970923 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-config\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970944 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/03eae766-055e-4339-a21d-f594802d636c-srv-cert\") pod \"catalog-operator-68c6474976-npxg4\" (UID: \"03eae766-055e-4339-a21d-f594802d636c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.970968 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0639041-89a4-4d66-9670-e360fb45626d-config\") pod \"service-ca-operator-777779d784-5lpcd\" (UID: \"f0639041-89a4-4d66-9670-e360fb45626d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.971051 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cwlwh\" (UniqueName: \"kubernetes.io/projected/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-kube-api-access-cwlwh\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.971076 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6887bb61-9f22-4386-b263-866334b6529e-proxy-tls\") pod \"machine-config-controller-84d6567774-t774k\" (UID: \"6887bb61-9f22-4386-b263-866334b6529e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t774k" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.971121 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-service-ca\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.971157 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxg6m\" (UniqueName: \"kubernetes.io/projected/e6cfdf35-7edc-48ac-b81b-45d7d57c7654-kube-api-access-rxg6m\") pod \"packageserver-d55dfcdfc-rjxlz\" (UID: \"e6cfdf35-7edc-48ac-b81b-45d7d57c7654\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.971191 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df8e8ead-4e67-4775-bce3-b48236e30573-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2s5cd\" (UID: \"df8e8ead-4e67-4775-bce3-b48236e30573\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd" Jan 24 07:43:33 crc 
kubenswrapper[4705]: I0124 07:43:33.971212 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-oauth-config\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.971234 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a84e98fc-8911-4fe1-8242-e906ccfdb277-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7cvlf\" (UID: \"a84e98fc-8911-4fe1-8242-e906ccfdb277\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7cvlf" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.971260 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df8e8ead-4e67-4775-bce3-b48236e30573-config\") pod \"kube-apiserver-operator-766d6c64bb-2s5cd\" (UID: \"df8e8ead-4e67-4775-bce3-b48236e30573\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.971325 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-trusted-ca-bundle\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.971348 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/697d22ea-60a0-44b5-a5fb-17c7c9caaadb-config-volume\") pod \"dns-default-bnwdd\" (UID: \"697d22ea-60a0-44b5-a5fb-17c7c9caaadb\") " pod="openshift-dns/dns-default-bnwdd" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.971372 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e4e30be1-989b-4a5d-a33c-79c00184ce75-ca-trust-extracted\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.971395 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s4sf\" (UniqueName: \"kubernetes.io/projected/483649f3-68d0-467c-b4ff-dfbb2b3c340a-kube-api-access-5s4sf\") pod \"olm-operator-6b444d44fb-b58b6\" (UID: \"483649f3-68d0-467c-b4ff-dfbb2b3c340a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.971417 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2bpd\" (UniqueName: \"kubernetes.io/projected/25819472-0561-4e4c-ad89-abfe02bc8484-kube-api-access-g2bpd\") pod \"migrator-59844c95c7-bjvr8\" (UID: \"25819472-0561-4e4c-ad89-abfe02bc8484\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bjvr8" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.971449 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6cfdf35-7edc-48ac-b81b-45d7d57c7654-webhook-cert\") pod \"packageserver-d55dfcdfc-rjxlz\" (UID: \"e6cfdf35-7edc-48ac-b81b-45d7d57c7654\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 
07:43:33.971485 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6990313f-0f9b-4a82-b072-4f094768a28e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pc548\" (UID: \"6990313f-0f9b-4a82-b072-4f094768a28e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.971506 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da4379ba-31cd-436c-b1a6-8e715c0d2dca-config-volume\") pod \"collect-profiles-29487330-7xnrl\" (UID: \"da4379ba-31cd-436c-b1a6-8e715c0d2dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.971540 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/100fdbf1-ca31-4a06-9f27-c3be6e08e887-trusted-ca\") pod \"console-operator-58897d9998-zfr4t\" (UID: \"100fdbf1-ca31-4a06-9f27-c3be6e08e887\") " pod="openshift-console-operator/console-operator-58897d9998-zfr4t" Jan 24 07:43:33 crc kubenswrapper[4705]: E0124 07:43:33.973306 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:34.473288832 +0000 UTC m=+153.193162120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.974407 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/544a6521-1632-42e7-aedc-c26453958c18-config\") pod \"authentication-operator-69f744f599-mz97d\" (UID: \"544a6521-1632-42e7-aedc-c26453958c18\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.978175 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df8e8ead-4e67-4775-bce3-b48236e30573-config\") pod \"kube-apiserver-operator-766d6c64bb-2s5cd\" (UID: \"df8e8ead-4e67-4775-bce3-b48236e30573\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.979068 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84a6e017-145f-4005-85b2-3f027185ed6c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4c8z4\" (UID: \"84a6e017-145f-4005-85b2-3f027185ed6c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.980214 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/544a6521-1632-42e7-aedc-c26453958c18-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-mz97d\" (UID: \"544a6521-1632-42e7-aedc-c26453958c18\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.984238 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df8e8ead-4e67-4775-bce3-b48236e30573-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2s5cd\" (UID: \"df8e8ead-4e67-4775-bce3-b48236e30573\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.984952 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/544a6521-1632-42e7-aedc-c26453958c18-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mz97d\" (UID: \"544a6521-1632-42e7-aedc-c26453958c18\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.987599 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gjggq"] Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.987848 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6887bb61-9f22-4386-b263-866334b6529e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-t774k\" (UID: \"6887bb61-9f22-4386-b263-866334b6529e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t774k" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.987870 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6887bb61-9f22-4386-b263-866334b6529e-proxy-tls\") pod \"machine-config-controller-84d6567774-t774k\" (UID: \"6887bb61-9f22-4386-b263-866334b6529e\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t774k" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.990128 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bvnn7"] Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.992801 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/544a6521-1632-42e7-aedc-c26453958c18-serving-cert\") pod \"authentication-operator-69f744f599-mz97d\" (UID: \"544a6521-1632-42e7-aedc-c26453958c18\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:33 crc kubenswrapper[4705]: I0124 07:43:33.996074 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84a6e017-145f-4005-85b2-3f027185ed6c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4c8z4\" (UID: \"84a6e017-145f-4005-85b2-3f027185ed6c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.002968 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-registry-tls\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.003189 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e4e30be1-989b-4a5d-a33c-79c00184ce75-ca-trust-extracted\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.003406 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e4e30be1-989b-4a5d-a33c-79c00184ce75-installation-pull-secrets\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.005209 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e4e30be1-989b-4a5d-a33c-79c00184ce75-registry-certificates\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.011853 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4e30be1-989b-4a5d-a33c-79c00184ce75-trusted-ca\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.011957 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df8e8ead-4e67-4775-bce3-b48236e30573-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2s5cd\" (UID: \"df8e8ead-4e67-4775-bce3-b48236e30573\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.013697 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm"] Jan 24 07:43:34 crc kubenswrapper[4705]: W0124 07:43:34.045238 4705 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69dbf7a5_1a4e_403b_8830_4e4b49305af5.slice/crio-9d08c051a5b4d4cb07b12c760dc3e55238aeb47705a29145731bd20bd5dc2e6e WatchSource:0}: Error finding container 9d08c051a5b4d4cb07b12c760dc3e55238aeb47705a29145731bd20bd5dc2e6e: Status 404 returned error can't find the container with id 9d08c051a5b4d4cb07b12c760dc3e55238aeb47705a29145731bd20bd5dc2e6e Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.046569 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkfkr\" (UniqueName: \"kubernetes.io/projected/544a6521-1632-42e7-aedc-c26453958c18-kube-api-access-fkfkr\") pod \"authentication-operator-69f744f599-mz97d\" (UID: \"544a6521-1632-42e7-aedc-c26453958c18\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.070973 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7czb5" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.072693 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37dfc0ce-5c63-4de8-8b92-08626d0ef9c6-images\") pod \"machine-config-operator-74547568cd-pk2hm\" (UID: \"37dfc0ce-5c63-4de8-8b92-08626d0ef9c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.072752 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cfa9c40-ad55-4bb3-b7ba-4325816a760d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2dg4s\" (UID: \"4cfa9c40-ad55-4bb3-b7ba-4325816a760d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2dg4s" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.072790 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/100fdbf1-ca31-4a06-9f27-c3be6e08e887-serving-cert\") pod \"console-operator-58897d9998-zfr4t\" (UID: \"100fdbf1-ca31-4a06-9f27-c3be6e08e887\") " pod="openshift-console-operator/console-operator-58897d9998-zfr4t" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.072923 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgrff\" (UniqueName: \"kubernetes.io/projected/03eae766-055e-4339-a21d-f594802d636c-kube-api-access-jgrff\") pod \"catalog-operator-68c6474976-npxg4\" (UID: \"03eae766-055e-4339-a21d-f594802d636c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.072981 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/483649f3-68d0-467c-b4ff-dfbb2b3c340a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-b58b6\" (UID: \"483649f3-68d0-467c-b4ff-dfbb2b3c340a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.073010 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh49d\" (UniqueName: \"kubernetes.io/projected/f0639041-89a4-4d66-9670-e360fb45626d-kube-api-access-hh49d\") pod \"service-ca-operator-777779d784-5lpcd\" (UID: \"f0639041-89a4-4d66-9670-e360fb45626d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.073036 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6cfdf35-7edc-48ac-b81b-45d7d57c7654-apiservice-cert\") pod \"packageserver-d55dfcdfc-rjxlz\" (UID: \"e6cfdf35-7edc-48ac-b81b-45d7d57c7654\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.073071 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvh55\" (UniqueName: \"kubernetes.io/projected/7a5ea77a-4a2f-4be0-81d3-9ecf84cfbe2c-kube-api-access-qvh55\") pod \"package-server-manager-789f6589d5-xc9ml\" (UID: \"7a5ea77a-4a2f-4be0-81d3-9ecf84cfbe2c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.073129 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49r8j\" (UniqueName: \"kubernetes.io/projected/100fdbf1-ca31-4a06-9f27-c3be6e08e887-kube-api-access-49r8j\") pod \"console-operator-58897d9998-zfr4t\" (UID: \"100fdbf1-ca31-4a06-9f27-c3be6e08e887\") " pod="openshift-console-operator/console-operator-58897d9998-zfr4t" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.073161 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-oauth-serving-cert\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.073167 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84a6e017-145f-4005-85b2-3f027185ed6c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4c8z4\" (UID: \"84a6e017-145f-4005-85b2-3f027185ed6c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.073196 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-plugins-dir\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.073259 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3bb788e4-fad9-4416-9042-7a46d8ef83b3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h8gjk\" (UID: \"3bb788e4-fad9-4416-9042-7a46d8ef83b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.073297 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/483649f3-68d0-467c-b4ff-dfbb2b3c340a-srv-cert\") pod \"olm-operator-6b444d44fb-b58b6\" (UID: \"483649f3-68d0-467c-b4ff-dfbb2b3c340a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.073321 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-mountpoint-dir\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.073343 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq78w\" (UniqueName: \"kubernetes.io/projected/6990313f-0f9b-4a82-b072-4f094768a28e-kube-api-access-cq78w\") pod \"ingress-operator-5b745b69d9-pc548\" (UID: \"6990313f-0f9b-4a82-b072-4f094768a28e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.073371 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dfb189df-fd59-4ce7-9cf9-56966dab7850-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-m5twk\" (UID: \"dfb189df-fd59-4ce7-9cf9-56966dab7850\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m5twk" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.073401 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab19a35e-a9f2-44e8-9e8c-40339f0a9195-cert\") pod \"ingress-canary-kqb54\" (UID: \"ab19a35e-a9f2-44e8-9e8c-40339f0a9195\") " pod="openshift-ingress-canary/ingress-canary-kqb54" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.073436 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0639041-89a4-4d66-9670-e360fb45626d-serving-cert\") pod \"service-ca-operator-777779d784-5lpcd\" (UID: \"f0639041-89a4-4d66-9670-e360fb45626d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.073481 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-plugins-dir\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.074857 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37dfc0ce-5c63-4de8-8b92-08626d0ef9c6-images\") pod \"machine-config-operator-74547568cd-pk2hm\" (UID: \"37dfc0ce-5c63-4de8-8b92-08626d0ef9c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.076607 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/697d22ea-60a0-44b5-a5fb-17c7c9caaadb-metrics-tls\") pod \"dns-default-bnwdd\" (UID: \"697d22ea-60a0-44b5-a5fb-17c7c9caaadb\") " pod="openshift-dns/dns-default-bnwdd" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.080487 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq8fr\" (UniqueName: \"kubernetes.io/projected/6887bb61-9f22-4386-b263-866334b6529e-kube-api-access-hq8fr\") pod \"machine-config-controller-84d6567774-t774k\" (UID: \"6887bb61-9f22-4386-b263-866334b6529e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t774k" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.081152 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgpnn\" (UniqueName: \"kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-kube-api-access-zgpnn\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.081799 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/100fdbf1-ca31-4a06-9f27-c3be6e08e887-serving-cert\") pod \"console-operator-58897d9998-zfr4t\" (UID: \"100fdbf1-ca31-4a06-9f27-c3be6e08e887\") " pod="openshift-console-operator/console-operator-58897d9998-zfr4t" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.081939 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-mountpoint-dir\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.082783 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-oauth-serving-cert\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.073485 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/697d22ea-60a0-44b5-a5fb-17c7c9caaadb-metrics-tls\") pod \"dns-default-bnwdd\" (UID: \"697d22ea-60a0-44b5-a5fb-17c7c9caaadb\") " pod="openshift-dns/dns-default-bnwdd" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.082924 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlzh6\" (UniqueName: \"kubernetes.io/projected/697d22ea-60a0-44b5-a5fb-17c7c9caaadb-kube-api-access-nlzh6\") pod \"dns-default-bnwdd\" (UID: \"697d22ea-60a0-44b5-a5fb-17c7c9caaadb\") " pod="openshift-dns/dns-default-bnwdd" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.082962 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b94xh\" (UniqueName: \"kubernetes.io/projected/dfb189df-fd59-4ce7-9cf9-56966dab7850-kube-api-access-b94xh\") pod \"multus-admission-controller-857f4d67dd-m5twk\" (UID: \"dfb189df-fd59-4ce7-9cf9-56966dab7850\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m5twk" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.082993 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6990313f-0f9b-4a82-b072-4f094768a28e-metrics-tls\") pod \"ingress-operator-5b745b69d9-pc548\" (UID: \"6990313f-0f9b-4a82-b072-4f094768a28e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.083022 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr9g7\" (UniqueName: \"kubernetes.io/projected/b0f4e8b0-b8c3-4243-867e-2d63b524aebd-kube-api-access-pr9g7\") pod \"machine-config-server-s85hw\" (UID: \"b0f4e8b0-b8c3-4243-867e-2d63b524aebd\") " pod="openshift-machine-config-operator/machine-config-server-s85hw" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.083048 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bb788e4-fad9-4416-9042-7a46d8ef83b3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-h8gjk\" (UID: \"3bb788e4-fad9-4416-9042-7a46d8ef83b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.083098 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv8hv\" (UniqueName: \"kubernetes.io/projected/4cfa9c40-ad55-4bb3-b7ba-4325816a760d-kube-api-access-tv8hv\") pod \"cluster-samples-operator-665b6dd947-2dg4s\" (UID: \"4cfa9c40-ad55-4bb3-b7ba-4325816a760d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2dg4s" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.083130 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e6cfdf35-7edc-48ac-b81b-45d7d57c7654-tmpfs\") pod \"packageserver-d55dfcdfc-rjxlz\" (UID: \"e6cfdf35-7edc-48ac-b81b-45d7d57c7654\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.083164 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c573a3c6-adea-4c48-a71b-4644822c9caa-signing-cabundle\") pod \"service-ca-9c57cc56f-58kmq\" (UID: \"c573a3c6-adea-4c48-a71b-4644822c9caa\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-58kmq" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.084301 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3bb788e4-fad9-4416-9042-7a46d8ef83b3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h8gjk\" (UID: \"3bb788e4-fad9-4416-9042-7a46d8ef83b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.085202 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cfa9c40-ad55-4bb3-b7ba-4325816a760d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2dg4s\" (UID: \"4cfa9c40-ad55-4bb3-b7ba-4325816a760d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2dg4s" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.085957 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0639041-89a4-4d66-9670-e360fb45626d-serving-cert\") pod \"service-ca-operator-777779d784-5lpcd\" (UID: \"f0639041-89a4-4d66-9670-e360fb45626d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.087480 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e6cfdf35-7edc-48ac-b81b-45d7d57c7654-tmpfs\") pod \"packageserver-d55dfcdfc-rjxlz\" (UID: \"e6cfdf35-7edc-48ac-b81b-45d7d57c7654\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.089420 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bb788e4-fad9-4416-9042-7a46d8ef83b3-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-h8gjk\" (UID: \"3bb788e4-fad9-4416-9042-7a46d8ef83b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.091545 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c573a3c6-adea-4c48-a71b-4644822c9caa-signing-cabundle\") pod \"service-ca-9c57cc56f-58kmq\" (UID: \"c573a3c6-adea-4c48-a71b-4644822c9caa\") " pod="openshift-service-ca/service-ca-9c57cc56f-58kmq" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.091913 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-registration-dir\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.092259 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-registration-dir\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.092349 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgqxz\" (UniqueName: \"kubernetes.io/projected/ec9c8003-615a-49f5-b23a-3fad6ba93ffd-kube-api-access-dgqxz\") pod \"kube-storage-version-migrator-operator-b67b599dd-b854t\" (UID: \"ec9c8003-615a-49f5-b23a-3fad6ba93ffd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.092390 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-serving-cert\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.092427 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-config\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.092449 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/03eae766-055e-4339-a21d-f594802d636c-srv-cert\") pod \"catalog-operator-68c6474976-npxg4\" (UID: \"03eae766-055e-4339-a21d-f594802d636c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.092477 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0639041-89a4-4d66-9670-e360fb45626d-config\") pod \"service-ca-operator-777779d784-5lpcd\" (UID: \"f0639041-89a4-4d66-9670-e360fb45626d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.092511 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwlwh\" (UniqueName: \"kubernetes.io/projected/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-kube-api-access-cwlwh\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.092539 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-service-ca\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.092583 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxg6m\" (UniqueName: \"kubernetes.io/projected/e6cfdf35-7edc-48ac-b81b-45d7d57c7654-kube-api-access-rxg6m\") pod \"packageserver-d55dfcdfc-rjxlz\" (UID: \"e6cfdf35-7edc-48ac-b81b-45d7d57c7654\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.092609 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-oauth-config\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.092638 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a84e98fc-8911-4fe1-8242-e906ccfdb277-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7cvlf\" (UID: \"a84e98fc-8911-4fe1-8242-e906ccfdb277\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7cvlf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.092678 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/697d22ea-60a0-44b5-a5fb-17c7c9caaadb-config-volume\") pod \"dns-default-bnwdd\" (UID: \"697d22ea-60a0-44b5-a5fb-17c7c9caaadb\") " pod="openshift-dns/dns-default-bnwdd" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.092706 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-trusted-ca-bundle\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.092741 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s4sf\" (UniqueName: \"kubernetes.io/projected/483649f3-68d0-467c-b4ff-dfbb2b3c340a-kube-api-access-5s4sf\") pod \"olm-operator-6b444d44fb-b58b6\" (UID: \"483649f3-68d0-467c-b4ff-dfbb2b3c340a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.092985 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2bpd\" (UniqueName: \"kubernetes.io/projected/25819472-0561-4e4c-ad89-abfe02bc8484-kube-api-access-g2bpd\") pod \"migrator-59844c95c7-bjvr8\" (UID: \"25819472-0561-4e4c-ad89-abfe02bc8484\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bjvr8" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093029 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/100fdbf1-ca31-4a06-9f27-c3be6e08e887-trusted-ca\") pod \"console-operator-58897d9998-zfr4t\" (UID: \"100fdbf1-ca31-4a06-9f27-c3be6e08e887\") " pod="openshift-console-operator/console-operator-58897d9998-zfr4t" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093054 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6cfdf35-7edc-48ac-b81b-45d7d57c7654-webhook-cert\") pod \"packageserver-d55dfcdfc-rjxlz\" (UID: \"e6cfdf35-7edc-48ac-b81b-45d7d57c7654\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093081 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6990313f-0f9b-4a82-b072-4f094768a28e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pc548\" (UID: \"6990313f-0f9b-4a82-b072-4f094768a28e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093107 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da4379ba-31cd-436c-b1a6-8e715c0d2dca-config-volume\") pod \"collect-profiles-29487330-7xnrl\" (UID: \"da4379ba-31cd-436c-b1a6-8e715c0d2dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093141 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec9c8003-615a-49f5-b23a-3fad6ba93ffd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b854t\" (UID: \"ec9c8003-615a-49f5-b23a-3fad6ba93ffd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093168 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37dfc0ce-5c63-4de8-8b92-08626d0ef9c6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pk2hm\" (UID: \"37dfc0ce-5c63-4de8-8b92-08626d0ef9c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093197 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjxn2\" (UniqueName: 
\"kubernetes.io/projected/37dfc0ce-5c63-4de8-8b92-08626d0ef9c6-kube-api-access-cjxn2\") pod \"machine-config-operator-74547568cd-pk2hm\" (UID: \"37dfc0ce-5c63-4de8-8b92-08626d0ef9c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093219 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37dfc0ce-5c63-4de8-8b92-08626d0ef9c6-proxy-tls\") pod \"machine-config-operator-74547568cd-pk2hm\" (UID: \"37dfc0ce-5c63-4de8-8b92-08626d0ef9c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093250 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-socket-dir\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093386 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093435 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdgzd\" (UniqueName: \"kubernetes.io/projected/a84e98fc-8911-4fe1-8242-e906ccfdb277-kube-api-access-wdgzd\") pod \"control-plane-machine-set-operator-78cbb6b69f-7cvlf\" (UID: \"a84e98fc-8911-4fe1-8242-e906ccfdb277\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7cvlf" Jan 24 
07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093469 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc4tj\" (UniqueName: \"kubernetes.io/projected/ab19a35e-a9f2-44e8-9e8c-40339f0a9195-kube-api-access-lc4tj\") pod \"ingress-canary-kqb54\" (UID: \"ab19a35e-a9f2-44e8-9e8c-40339f0a9195\") " pod="openshift-ingress-canary/ingress-canary-kqb54" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093518 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5ea77a-4a2f-4be0-81d3-9ecf84cfbe2c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xc9ml\" (UID: \"7a5ea77a-4a2f-4be0-81d3-9ecf84cfbe2c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093547 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrgql\" (UniqueName: \"kubernetes.io/projected/c573a3c6-adea-4c48-a71b-4644822c9caa-kube-api-access-mrgql\") pod \"service-ca-9c57cc56f-58kmq\" (UID: \"c573a3c6-adea-4c48-a71b-4644822c9caa\") " pod="openshift-service-ca/service-ca-9c57cc56f-58kmq" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093574 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c573a3c6-adea-4c48-a71b-4644822c9caa-signing-key\") pod \"service-ca-9c57cc56f-58kmq\" (UID: \"c573a3c6-adea-4c48-a71b-4644822c9caa\") " pod="openshift-service-ca/service-ca-9c57cc56f-58kmq" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093619 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec9c8003-615a-49f5-b23a-3fad6ba93ffd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-b854t\" (UID: 
\"ec9c8003-615a-49f5-b23a-3fad6ba93ffd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093657 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/03eae766-055e-4339-a21d-f594802d636c-profile-collector-cert\") pod \"catalog-operator-68c6474976-npxg4\" (UID: \"03eae766-055e-4339-a21d-f594802d636c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093686 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da4379ba-31cd-436c-b1a6-8e715c0d2dca-secret-volume\") pod \"collect-profiles-29487330-7xnrl\" (UID: \"da4379ba-31cd-436c-b1a6-8e715c0d2dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.093925 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mbg8\" (UniqueName: \"kubernetes.io/projected/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-kube-api-access-7mbg8\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.094015 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcnhk\" (UniqueName: \"kubernetes.io/projected/3bb788e4-fad9-4416-9042-7a46d8ef83b3-kube-api-access-qcnhk\") pod \"marketplace-operator-79b997595-h8gjk\" (UID: \"3bb788e4-fad9-4416-9042-7a46d8ef83b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.094173 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" 
(UniqueName: \"kubernetes.io/secret/b0f4e8b0-b8c3-4243-867e-2d63b524aebd-certs\") pod \"machine-config-server-s85hw\" (UID: \"b0f4e8b0-b8c3-4243-867e-2d63b524aebd\") " pod="openshift-machine-config-operator/machine-config-server-s85hw" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.097916 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/100fdbf1-ca31-4a06-9f27-c3be6e08e887-config\") pod \"console-operator-58897d9998-zfr4t\" (UID: \"100fdbf1-ca31-4a06-9f27-c3be6e08e887\") " pod="openshift-console-operator/console-operator-58897d9998-zfr4t" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.097955 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6990313f-0f9b-4a82-b072-4f094768a28e-trusted-ca\") pod \"ingress-operator-5b745b69d9-pc548\" (UID: \"6990313f-0f9b-4a82-b072-4f094768a28e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.097988 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b0f4e8b0-b8c3-4243-867e-2d63b524aebd-node-bootstrap-token\") pod \"machine-config-server-s85hw\" (UID: \"b0f4e8b0-b8c3-4243-867e-2d63b524aebd\") " pod="openshift-machine-config-operator/machine-config-server-s85hw" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.098010 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-csi-data-dir\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.098039 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-d77ct\" (UniqueName: \"kubernetes.io/projected/da4379ba-31cd-436c-b1a6-8e715c0d2dca-kube-api-access-d77ct\") pod \"collect-profiles-29487330-7xnrl\" (UID: \"da4379ba-31cd-436c-b1a6-8e715c0d2dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.098671 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0639041-89a4-4d66-9670-e360fb45626d-config\") pod \"service-ca-operator-777779d784-5lpcd\" (UID: \"f0639041-89a4-4d66-9670-e360fb45626d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.099486 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/483649f3-68d0-467c-b4ff-dfbb2b3c340a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-b58b6\" (UID: \"483649f3-68d0-467c-b4ff-dfbb2b3c340a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.100158 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37dfc0ce-5c63-4de8-8b92-08626d0ef9c6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pk2hm\" (UID: \"37dfc0ce-5c63-4de8-8b92-08626d0ef9c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.097841 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da4379ba-31cd-436c-b1a6-8e715c0d2dca-config-volume\") pod \"collect-profiles-29487330-7xnrl\" (UID: \"da4379ba-31cd-436c-b1a6-8e715c0d2dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" Jan 24 07:43:34 crc 
kubenswrapper[4705]: E0124 07:43:34.100314 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:34.600295995 +0000 UTC m=+153.320169283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.100389 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-socket-dir\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.101179 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-service-ca\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.101449 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec9c8003-615a-49f5-b23a-3fad6ba93ffd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b854t\" (UID: \"ec9c8003-615a-49f5-b23a-3fad6ba93ffd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t" 
Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.109402 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/697d22ea-60a0-44b5-a5fb-17c7c9caaadb-config-volume\") pod \"dns-default-bnwdd\" (UID: \"697d22ea-60a0-44b5-a5fb-17c7c9caaadb\") " pod="openshift-dns/dns-default-bnwdd" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.109416 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-csi-data-dir\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.109939 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-config\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.110556 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/100fdbf1-ca31-4a06-9f27-c3be6e08e887-config\") pod \"console-operator-58897d9998-zfr4t\" (UID: \"100fdbf1-ca31-4a06-9f27-c3be6e08e887\") " pod="openshift-console-operator/console-operator-58897d9998-zfr4t" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.110883 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/483649f3-68d0-467c-b4ff-dfbb2b3c340a-srv-cert\") pod \"olm-operator-6b444d44fb-b58b6\" (UID: \"483649f3-68d0-467c-b4ff-dfbb2b3c340a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.111327 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dfb189df-fd59-4ce7-9cf9-56966dab7850-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-m5twk\" (UID: \"dfb189df-fd59-4ce7-9cf9-56966dab7850\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m5twk" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.111678 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c573a3c6-adea-4c48-a71b-4644822c9caa-signing-key\") pod \"service-ca-9c57cc56f-58kmq\" (UID: \"c573a3c6-adea-4c48-a71b-4644822c9caa\") " pod="openshift-service-ca/service-ca-9c57cc56f-58kmq" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.112107 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6990313f-0f9b-4a82-b072-4f094768a28e-metrics-tls\") pod \"ingress-operator-5b745b69d9-pc548\" (UID: \"6990313f-0f9b-4a82-b072-4f094768a28e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.112227 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-oauth-config\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.113147 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-serving-cert\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.113689 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6cfdf35-7edc-48ac-b81b-45d7d57c7654-apiservice-cert\") pod \"packageserver-d55dfcdfc-rjxlz\" (UID: \"e6cfdf35-7edc-48ac-b81b-45d7d57c7654\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.114332 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/03eae766-055e-4339-a21d-f594802d636c-profile-collector-cert\") pod \"catalog-operator-68c6474976-npxg4\" (UID: \"03eae766-055e-4339-a21d-f594802d636c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.114459 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-trusted-ca-bundle\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.115505 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab19a35e-a9f2-44e8-9e8c-40339f0a9195-cert\") pod \"ingress-canary-kqb54\" (UID: \"ab19a35e-a9f2-44e8-9e8c-40339f0a9195\") " pod="openshift-ingress-canary/ingress-canary-kqb54" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.115792 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6990313f-0f9b-4a82-b072-4f094768a28e-trusted-ca\") pod \"ingress-operator-5b745b69d9-pc548\" (UID: \"6990313f-0f9b-4a82-b072-4f094768a28e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.115992 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5ea77a-4a2f-4be0-81d3-9ecf84cfbe2c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xc9ml\" (UID: \"7a5ea77a-4a2f-4be0-81d3-9ecf84cfbe2c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.116816 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/100fdbf1-ca31-4a06-9f27-c3be6e08e887-trusted-ca\") pod \"console-operator-58897d9998-zfr4t\" (UID: \"100fdbf1-ca31-4a06-9f27-c3be6e08e887\") " pod="openshift-console-operator/console-operator-58897d9998-zfr4t" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.117469 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.120563 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b0f4e8b0-b8c3-4243-867e-2d63b524aebd-certs\") pod \"machine-config-server-s85hw\" (UID: \"b0f4e8b0-b8c3-4243-867e-2d63b524aebd\") " pod="openshift-machine-config-operator/machine-config-server-s85hw" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.121147 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6cfdf35-7edc-48ac-b81b-45d7d57c7654-webhook-cert\") pod \"packageserver-d55dfcdfc-rjxlz\" (UID: \"e6cfdf35-7edc-48ac-b81b-45d7d57c7654\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.122389 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/da4379ba-31cd-436c-b1a6-8e715c0d2dca-secret-volume\") pod \"collect-profiles-29487330-7xnrl\" (UID: \"da4379ba-31cd-436c-b1a6-8e715c0d2dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.125254 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-bound-sa-token\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.125289 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec9c8003-615a-49f5-b23a-3fad6ba93ffd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-b854t\" (UID: \"ec9c8003-615a-49f5-b23a-3fad6ba93ffd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.125352 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a84e98fc-8911-4fe1-8242-e906ccfdb277-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7cvlf\" (UID: \"a84e98fc-8911-4fe1-8242-e906ccfdb277\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7cvlf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.125353 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/03eae766-055e-4339-a21d-f594802d636c-srv-cert\") pod \"catalog-operator-68c6474976-npxg4\" (UID: \"03eae766-055e-4339-a21d-f594802d636c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" Jan 
24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.126325 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b0f4e8b0-b8c3-4243-867e-2d63b524aebd-node-bootstrap-token\") pod \"machine-config-server-s85hw\" (UID: \"b0f4e8b0-b8c3-4243-867e-2d63b524aebd\") " pod="openshift-machine-config-operator/machine-config-server-s85hw" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.129226 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37dfc0ce-5c63-4de8-8b92-08626d0ef9c6-proxy-tls\") pod \"machine-config-operator-74547568cd-pk2hm\" (UID: \"37dfc0ce-5c63-4de8-8b92-08626d0ef9c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.135473 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh49d\" (UniqueName: \"kubernetes.io/projected/f0639041-89a4-4d66-9670-e360fb45626d-kube-api-access-hh49d\") pod \"service-ca-operator-777779d784-5lpcd\" (UID: \"f0639041-89a4-4d66-9670-e360fb45626d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.199082 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:34 crc kubenswrapper[4705]: E0124 07:43:34.199673 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 07:43:34.699652273 +0000 UTC m=+153.419525561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.203536 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgrff\" (UniqueName: \"kubernetes.io/projected/03eae766-055e-4339-a21d-f594802d636c-kube-api-access-jgrff\") pod \"catalog-operator-68c6474976-npxg4\" (UID: \"03eae766-055e-4339-a21d-f594802d636c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.212110 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.228493 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t774k" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.233719 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" event={"ID":"0e7822fc-7419-4806-907b-b442a62f4baf","Type":"ContainerStarted","Data":"07f8f4af3b3f7a846b56d117945df4ce50a9de4de5524620fc2cce1bb8ff93ef"} Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.233745 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.248526 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" event={"ID":"1467a368-ffe2-4fd5-abca-e42018890e40","Type":"ContainerStarted","Data":"8d96f35fdc399458c5a49c6e9095c226c88c421ffa7ef615b495f514f2d905a8"} Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.248977 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" event={"ID":"1467a368-ffe2-4fd5-abca-e42018890e40","Type":"ContainerStarted","Data":"ea09e34340fa961e5878e97cee435e480c459794f643140ba0222c42b006c388"} Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.250997 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" event={"ID":"3a00e317-0d77-42ec-b31c-916797497da3","Type":"ContainerStarted","Data":"f7565aea5dbeb2278bb3c450d4bc1c709fb50c5d40ffd5f2bbc8b3bb26be39ec"} Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.256800 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" event={"ID":"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77","Type":"ContainerStarted","Data":"df8be331ddf2318f6b38440521d63cf012f10b0c87d30b1db792eb9c3470216c"} Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.265109 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb" event={"ID":"69dbf7a5-1a4e-403b-8830-4e4b49305af5","Type":"ContainerStarted","Data":"9d08c051a5b4d4cb07b12c760dc3e55238aeb47705a29145731bd20bd5dc2e6e"} Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.268983 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" event={"ID":"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5","Type":"ContainerStarted","Data":"f03490bb6a056b13d32f29c4fd53f96da500a5285f84f272abb1770cb37e8fe0"} Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.269049 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" event={"ID":"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5","Type":"ContainerStarted","Data":"890bbdfadadec3399cb7bb5649c353a165fa59fb34bb75d3f32e8adb51b63a3f"} Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.269454 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.271079 4705 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-h4w4g container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.271118 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" podUID="774c5a77-4cf4-4f66-8b1e-b44e791ff1e5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.275326 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" event={"ID":"cfbb848d-6faf-4397-94fa-49b2a319c091","Type":"ContainerStarted","Data":"d26d4113b2cad0fd2411a14c9dd7fc88e9b53b498f2e106cf14a59a33cbaa7fa"} Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.275357 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" event={"ID":"cfbb848d-6faf-4397-94fa-49b2a319c091","Type":"ContainerStarted","Data":"d45d88f70d5c2c2e143e761f690e4cebb829123d3bca2e32d85fdf5d5d2f421f"} Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.279864 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq78w\" (UniqueName: \"kubernetes.io/projected/6990313f-0f9b-4a82-b072-4f094768a28e-kube-api-access-cq78w\") pod \"ingress-operator-5b745b69d9-pc548\" (UID: \"6990313f-0f9b-4a82-b072-4f094768a28e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.301542 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.303106 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:34 crc kubenswrapper[4705]: E0124 07:43:34.303723 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:34.803712466 +0000 UTC m=+153.523585754 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.306397 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" event={"ID":"02af14b8-f5ac-4ce9-a001-8389192957e1","Type":"ContainerStarted","Data":"f4bd1ed203aa0c5f9a231067fc74229d2f3297280f2727aa5f731537ad22e238"} Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.309329 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gjggq" event={"ID":"a144ed50-2315-4874-a8f3-2f1f39111666","Type":"ContainerStarted","Data":"4bcd1e7b9a76247845bae70c1d0eb5dd9ee7eca37edf2e24ebb081ca3b25e28a"} Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.311214 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rfsmt" event={"ID":"a5317443-5085-4cd9-b3fb-6b8282746932","Type":"ContainerStarted","Data":"e4a3dc4e6f7e35538e6256c0f6ad39df6f715aed1bb1109a5b3b7e0482c7ca36"} Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.322622 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m"] Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.329627 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d54hp"] Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.339588 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-rnmwt"] Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.342471 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvh55\" (UniqueName: \"kubernetes.io/projected/7a5ea77a-4a2f-4be0-81d3-9ecf84cfbe2c-kube-api-access-qvh55\") pod \"package-server-manager-789f6589d5-xc9ml\" (UID: \"7a5ea77a-4a2f-4be0-81d3-9ecf84cfbe2c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.345985 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr9g7\" (UniqueName: \"kubernetes.io/projected/b0f4e8b0-b8c3-4243-867e-2d63b524aebd-kube-api-access-pr9g7\") pod \"machine-config-server-s85hw\" (UID: \"b0f4e8b0-b8c3-4243-867e-2d63b524aebd\") " pod="openshift-machine-config-operator/machine-config-server-s85hw" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.346807 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgqxz\" (UniqueName: \"kubernetes.io/projected/ec9c8003-615a-49f5-b23a-3fad6ba93ffd-kube-api-access-dgqxz\") pod \"kube-storage-version-migrator-operator-b67b599dd-b854t\" (UID: \"ec9c8003-615a-49f5-b23a-3fad6ba93ffd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.346882 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.347054 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlzh6\" (UniqueName: \"kubernetes.io/projected/697d22ea-60a0-44b5-a5fb-17c7c9caaadb-kube-api-access-nlzh6\") pod \"dns-default-bnwdd\" (UID: \"697d22ea-60a0-44b5-a5fb-17c7c9caaadb\") " pod="openshift-dns/dns-default-bnwdd" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.347335 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49r8j\" (UniqueName: \"kubernetes.io/projected/100fdbf1-ca31-4a06-9f27-c3be6e08e887-kube-api-access-49r8j\") pod \"console-operator-58897d9998-zfr4t\" (UID: \"100fdbf1-ca31-4a06-9f27-c3be6e08e887\") " pod="openshift-console-operator/console-operator-58897d9998-zfr4t" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.347345 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv8hv\" (UniqueName: \"kubernetes.io/projected/4cfa9c40-ad55-4bb3-b7ba-4325816a760d-kube-api-access-tv8hv\") pod \"cluster-samples-operator-665b6dd947-2dg4s\" (UID: \"4cfa9c40-ad55-4bb3-b7ba-4325816a760d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2dg4s" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.352741 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b94xh\" (UniqueName: \"kubernetes.io/projected/dfb189df-fd59-4ce7-9cf9-56966dab7850-kube-api-access-b94xh\") pod \"multus-admission-controller-857f4d67dd-m5twk\" (UID: \"dfb189df-fd59-4ce7-9cf9-56966dab7850\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m5twk" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.354309 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.358371 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6990313f-0f9b-4a82-b072-4f094768a28e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pc548\" (UID: \"6990313f-0f9b-4a82-b072-4f094768a28e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.445568 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2dg4s" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.445797 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bnwdd" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.446832 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-s85hw" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.446845 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.447023 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zfr4t" Jan 24 07:43:34 crc kubenswrapper[4705]: E0124 07:43:34.447181 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:34.947161832 +0000 UTC m=+153.667035120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.447394 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:34 crc kubenswrapper[4705]: E0124 07:43:34.448569 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:34.948556092 +0000 UTC m=+153.668429380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.450023 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc4tj\" (UniqueName: \"kubernetes.io/projected/ab19a35e-a9f2-44e8-9e8c-40339f0a9195-kube-api-access-lc4tj\") pod \"ingress-canary-kqb54\" (UID: \"ab19a35e-a9f2-44e8-9e8c-40339f0a9195\") " pod="openshift-ingress-canary/ingress-canary-kqb54" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.477581 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdgzd\" (UniqueName: \"kubernetes.io/projected/a84e98fc-8911-4fe1-8242-e906ccfdb277-kube-api-access-wdgzd\") pod \"control-plane-machine-set-operator-78cbb6b69f-7cvlf\" (UID: \"a84e98fc-8911-4fe1-8242-e906ccfdb277\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7cvlf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.486814 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxg6m\" (UniqueName: \"kubernetes.io/projected/e6cfdf35-7edc-48ac-b81b-45d7d57c7654-kube-api-access-rxg6m\") pod \"packageserver-d55dfcdfc-rjxlz\" (UID: \"e6cfdf35-7edc-48ac-b81b-45d7d57c7654\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.505074 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjxn2\" (UniqueName: \"kubernetes.io/projected/37dfc0ce-5c63-4de8-8b92-08626d0ef9c6-kube-api-access-cjxn2\") pod 
\"machine-config-operator-74547568cd-pk2hm\" (UID: \"37dfc0ce-5c63-4de8-8b92-08626d0ef9c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.506796 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mbg8\" (UniqueName: \"kubernetes.io/projected/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-kube-api-access-7mbg8\") pod \"console-f9d7485db-n7xmf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.507042 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d77ct\" (UniqueName: \"kubernetes.io/projected/da4379ba-31cd-436c-b1a6-8e715c0d2dca-kube-api-access-d77ct\") pod \"collect-profiles-29487330-7xnrl\" (UID: \"da4379ba-31cd-436c-b1a6-8e715c0d2dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.508753 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrgql\" (UniqueName: \"kubernetes.io/projected/c573a3c6-adea-4c48-a71b-4644822c9caa-kube-api-access-mrgql\") pod \"service-ca-9c57cc56f-58kmq\" (UID: \"c573a3c6-adea-4c48-a71b-4644822c9caa\") " pod="openshift-service-ca/service-ca-9c57cc56f-58kmq" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.566255 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwlwh\" (UniqueName: \"kubernetes.io/projected/8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c-kube-api-access-cwlwh\") pod \"csi-hostpathplugin-gbdm8\" (UID: \"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c\") " pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.566774 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.567372 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:34 crc kubenswrapper[4705]: E0124 07:43:34.567765 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:35.06774949 +0000 UTC m=+153.787622778 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.570473 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7cvlf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.577530 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.581586 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2bpd\" (UniqueName: \"kubernetes.io/projected/25819472-0561-4e4c-ad89-abfe02bc8484-kube-api-access-g2bpd\") pod \"migrator-59844c95c7-bjvr8\" (UID: \"25819472-0561-4e4c-ad89-abfe02bc8484\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bjvr8" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.584119 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcnhk\" (UniqueName: \"kubernetes.io/projected/3bb788e4-fad9-4416-9042-7a46d8ef83b3-kube-api-access-qcnhk\") pod \"marketplace-operator-79b997595-h8gjk\" (UID: \"3bb788e4-fad9-4416-9042-7a46d8ef83b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.590780 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-m5twk" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.598011 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-58kmq" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.604885 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.610937 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.612113 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s4sf\" (UniqueName: \"kubernetes.io/projected/483649f3-68d0-467c-b4ff-dfbb2b3c340a-kube-api-access-5s4sf\") pod \"olm-operator-6b444d44fb-b58b6\" (UID: \"483649f3-68d0-467c-b4ff-dfbb2b3c340a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.618810 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.626847 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.633760 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bjvr8" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.640383 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.670260 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:34 crc kubenswrapper[4705]: E0124 07:43:34.670753 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:35.170734222 +0000 UTC m=+153.890607500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.676570 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.700014 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kqb54" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.825152 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:34 crc kubenswrapper[4705]: E0124 07:43:34.826080 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:35.326000048 +0000 UTC m=+154.045873336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.856432 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" Jan 24 07:43:34 crc kubenswrapper[4705]: I0124 07:43:34.937722 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:34 crc kubenswrapper[4705]: E0124 07:43:34.938295 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:35.438276687 +0000 UTC m=+154.158149975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.039164 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:35 crc kubenswrapper[4705]: E0124 07:43:35.039615 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:35.539578281 +0000 UTC m=+154.259451569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.144151 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:35 crc kubenswrapper[4705]: E0124 07:43:35.144588 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:35.64456522 +0000 UTC m=+154.364438518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.223572 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8"] Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.237207 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd"] Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.247946 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:35 crc kubenswrapper[4705]: E0124 07:43:35.248612 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:35.748592742 +0000 UTC m=+154.468466030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.249002 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:35 crc kubenswrapper[4705]: E0124 07:43:35.249761 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:35.749747836 +0000 UTC m=+154.469621124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.290620 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" podStartSLOduration=132.290601451 podStartE2EDuration="2m12.290601451s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:35.290054465 +0000 UTC m=+154.009927753" watchObservedRunningTime="2026-01-24 07:43:35.290601451 +0000 UTC m=+154.010474749" Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.376879 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:35 crc kubenswrapper[4705]: E0124 07:43:35.377386 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:35.877339875 +0000 UTC m=+154.597213163 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.377678 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:35 crc kubenswrapper[4705]: E0124 07:43:35.380282 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:35.880260709 +0000 UTC m=+154.600133997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.388204 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88"] Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.480955 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:35 crc kubenswrapper[4705]: E0124 07:43:35.481375 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:35.981355087 +0000 UTC m=+154.701228385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.481764 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:35 crc kubenswrapper[4705]: E0124 07:43:35.482162 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:35.98215208 +0000 UTC m=+154.702025368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.499680 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m" event={"ID":"592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc","Type":"ContainerStarted","Data":"d6ac6185a71485a5c3f30841ab634f9ffc1e4bad6d03e899856ac739d7da4e03"} Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.518408 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" event={"ID":"23a30e75-f6c5-417a-8931-2467fa9615a8","Type":"ContainerStarted","Data":"26e8f7423b2a50b83737aaa5bc6145304f2a3f4160f0c8471d9b47ed76621e14"} Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.520428 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" event={"ID":"90ec9237-0f8d-4641-8e07-7fb662297324","Type":"ContainerStarted","Data":"c485c137813bb676013bc0fdc4b54acfe594c7be5157b121c4ac1d4be24aa380"} Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.524003 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" event={"ID":"1467a368-ffe2-4fd5-abca-e42018890e40","Type":"ContainerStarted","Data":"913b1de014bd27e0b7293202861605aaa6c807c7f1be1509eab04d393896c15c"} Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.549933 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-s85hw" event={"ID":"b0f4e8b0-b8c3-4243-867e-2d63b524aebd","Type":"ContainerStarted","Data":"5b57a10bc0792293b99f90a8f94baee2e179bf7287b91731f5f800698f8472d1"} Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.550328 4705 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-h4w4g container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.550458 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" podUID="774c5a77-4cf4-4f66-8b1e-b44e791ff1e5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.583484 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:35 crc kubenswrapper[4705]: E0124 07:43:35.583907 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:36.083885846 +0000 UTC m=+154.803759134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.684954 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.685690 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v85qc" podStartSLOduration=134.685668523 podStartE2EDuration="2m14.685668523s" podCreationTimestamp="2026-01-24 07:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:35.574425744 +0000 UTC m=+154.294299042" watchObservedRunningTime="2026-01-24 07:43:35.685668523 +0000 UTC m=+154.405541831" Jan 24 07:43:35 crc kubenswrapper[4705]: E0124 07:43:35.686429 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:36.186373034 +0000 UTC m=+154.906246392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:35 crc kubenswrapper[4705]: W0124 07:43:35.779367 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0639041_89a4_4d66_9670_e360fb45626d.slice/crio-a54e0ccc410acb50fd2486d74714f4855777555be73e83a9cfeb4f2a6afafaae WatchSource:0}: Error finding container a54e0ccc410acb50fd2486d74714f4855777555be73e83a9cfeb4f2a6afafaae: Status 404 returned error can't find the container with id a54e0ccc410acb50fd2486d74714f4855777555be73e83a9cfeb4f2a6afafaae Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.786110 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:35 crc kubenswrapper[4705]: E0124 07:43:35.786481 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:36.286467993 +0000 UTC m=+155.006341281 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:35 crc kubenswrapper[4705]: I0124 07:43:35.898058 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:35 crc kubenswrapper[4705]: E0124 07:43:35.898600 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:36.398580087 +0000 UTC m=+155.118453415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.141722 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:36 crc kubenswrapper[4705]: E0124 07:43:36.141833 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:36.641800353 +0000 UTC m=+155.361673641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.142724 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:36 crc kubenswrapper[4705]: E0124 07:43:36.143123 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:36.64311316 +0000 UTC m=+155.362986438 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.176096 4705 csr.go:261] certificate signing request csr-k7jl8 is approved, waiting to be issued Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.184882 4705 csr.go:257] certificate signing request csr-k7jl8 is issued Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.246040 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:36 crc kubenswrapper[4705]: E0124 07:43:36.248109 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:36.748081289 +0000 UTC m=+155.467954577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.254459 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:36 crc kubenswrapper[4705]: E0124 07:43:36.255122 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:36.755109421 +0000 UTC m=+155.474982709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.356333 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:36 crc kubenswrapper[4705]: E0124 07:43:36.356662 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:36.856642241 +0000 UTC m=+155.576515529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.598915 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:36 crc kubenswrapper[4705]: E0124 07:43:36.599234 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:37.099222068 +0000 UTC m=+155.819095356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.701492 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:36 crc kubenswrapper[4705]: E0124 07:43:36.701663 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:37.201639144 +0000 UTC m=+155.921512432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.701801 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:36 crc kubenswrapper[4705]: E0124 07:43:36.702141 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:37.202130338 +0000 UTC m=+155.922003626 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.802794 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:36 crc kubenswrapper[4705]: E0124 07:43:36.803112 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:37.303096652 +0000 UTC m=+156.022969940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.894406 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" event={"ID":"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77","Type":"ContainerStarted","Data":"1d03f5180351561f6578c4f3db7a550b571fecdd6370cddde1c870694f1544cc"} Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.895413 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.904440 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:36 crc kubenswrapper[4705]: E0124 07:43:36.905019 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:37.405002453 +0000 UTC m=+156.124875741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.906294 4705 generic.go:334] "Generic (PLEG): container finished" podID="02af14b8-f5ac-4ce9-a001-8389192957e1" containerID="3f032873f741e2b443067afd3c0301c0a41b4964bccaba45b6df9277a9a341ca" exitCode=0 Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.906370 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" event={"ID":"02af14b8-f5ac-4ce9-a001-8389192957e1","Type":"ContainerDied","Data":"3f032873f741e2b443067afd3c0301c0a41b4964bccaba45b6df9277a9a341ca"} Jan 24 07:43:36 crc kubenswrapper[4705]: I0124 07:43:36.983989 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gjggq" event={"ID":"a144ed50-2315-4874-a8f3-2f1f39111666","Type":"ContainerStarted","Data":"bba2d8963ca202a686a55b45222d96a771d5ec1f8c632d6dc29bfc67681c2e25"} Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.021508 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:37 crc kubenswrapper[4705]: E0124 07:43:37.021871 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:37.521836443 +0000 UTC m=+156.241709731 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.022213 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.025548 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb" event={"ID":"69dbf7a5-1a4e-403b-8830-4e4b49305af5","Type":"ContainerStarted","Data":"efb7775be0d2e9dd62c5505ac3c9827ff721124efbdec5c5d823b8f437a60a24"} Jan 24 07:43:37 crc kubenswrapper[4705]: E0124 07:43:37.032168 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:37.532132009 +0000 UTC m=+156.252005297 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.042311 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88" event={"ID":"b6e67495-3fae-45d0-a5d5-4741c0a763c9","Type":"ContainerStarted","Data":"dec576180d4d17ccb317693709953bddbefbe123ad4c7935d837677a72af131c"} Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.050260 4705 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-27hpm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.050332 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" podUID="c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.052515 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" event={"ID":"9f220961-3dce-41b1-8ec4-26dece52318b","Type":"ContainerStarted","Data":"c0492c27ea52be2f18d8ec010322259dec7bcd1046063cf374a7e27d7d9176e6"} Jan 24 07:43:37 crc 
kubenswrapper[4705]: I0124 07:43:37.055476 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rfsmt" event={"ID":"a5317443-5085-4cd9-b3fb-6b8282746932","Type":"ContainerStarted","Data":"52c64d2eb1196c2071a73e52c646e090853937f05418c38e7b0afbaf5eb84fc8"} Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.061074 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" event={"ID":"90ec9237-0f8d-4641-8e07-7fb662297324","Type":"ContainerStarted","Data":"b7c78c609ed0b29ff4a63d63179fdd29f1e1963c40aeb5d3399a395fa76c5b13"} Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.062025 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.079896 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.079955 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.080293 4705 generic.go:334] "Generic (PLEG): container finished" podID="0e7822fc-7419-4806-907b-b442a62f4baf" containerID="cafe5881be3e6bdca64e954adc5248a88e25d2a1e0b61ede7a3f6ffc3b1046c0" exitCode=0 Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.080423 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" event={"ID":"0e7822fc-7419-4806-907b-b442a62f4baf","Type":"ContainerDied","Data":"cafe5881be3e6bdca64e954adc5248a88e25d2a1e0b61ede7a3f6ffc3b1046c0"} Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.081509 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd" event={"ID":"f0639041-89a4-4d66-9670-e360fb45626d","Type":"ContainerStarted","Data":"a54e0ccc410acb50fd2486d74714f4855777555be73e83a9cfeb4f2a6afafaae"} Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.083153 4705 generic.go:334] "Generic (PLEG): container finished" podID="3a00e317-0d77-42ec-b31c-916797497da3" containerID="36d8fe76f92da2c7bd3f25c368df3666bc453bd1a3ab4b1597981dfe59d9baaf" exitCode=0 Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.083206 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" event={"ID":"3a00e317-0d77-42ec-b31c-916797497da3","Type":"ContainerDied","Data":"36d8fe76f92da2c7bd3f25c368df3666bc453bd1a3ab4b1597981dfe59d9baaf"} Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.086451 4705 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rnmwt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.086579 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" podUID="90ec9237-0f8d-4641-8e07-7fb662297324" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.148506 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:37 crc kubenswrapper[4705]: E0124 07:43:37.150023 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:37.650007609 +0000 UTC m=+156.369880897 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.189632 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-24 07:38:36 +0000 UTC, rotation deadline is 2026-11-08 01:47:47.575331025 +0000 UTC Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.189681 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6906h4m10.385653775s for next certificate rotation Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.251578 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:37 crc kubenswrapper[4705]: E0124 07:43:37.266642 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:37.766621153 +0000 UTC m=+156.486494441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.357008 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:37 crc kubenswrapper[4705]: E0124 07:43:37.357508 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:37.857486307 +0000 UTC m=+156.577359585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.357981 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:37 crc kubenswrapper[4705]: E0124 07:43:37.358455 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:37.858444104 +0000 UTC m=+156.578317392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:37 crc kubenswrapper[4705]: E0124 07:43:37.463120 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:37.963087014 +0000 UTC m=+156.682960302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.462197 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.468841 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:37 crc kubenswrapper[4705]: E0124 07:43:37.469809 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:37.969786447 +0000 UTC m=+156.689659735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.600731 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:37 crc kubenswrapper[4705]: E0124 07:43:37.606437 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:38.106390886 +0000 UTC m=+156.826264574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:37 crc kubenswrapper[4705]: I0124 07:43:37.707249 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:37 crc kubenswrapper[4705]: E0124 07:43:37.707992 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:38.207963887 +0000 UTC m=+156.927837175 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:37.925321 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:38 crc kubenswrapper[4705]: E0124 07:43:37.925637 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:38.425622307 +0000 UTC m=+157.145495595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.032308 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.038398 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:38 crc kubenswrapper[4705]: E0124 07:43:38.039013 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:38.538992788 +0000 UTC m=+157.258866076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.140643 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:38 crc kubenswrapper[4705]: E0124 07:43:38.141142 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:38.641117856 +0000 UTC m=+157.360991154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.326954 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:38 crc kubenswrapper[4705]: E0124 07:43:38.327344 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:38.827331851 +0000 UTC m=+157.547205139 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.343392 4705 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-27hpm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.343445 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" podUID="c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.343605 4705 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rnmwt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.343647 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" podUID="90ec9237-0f8d-4641-8e07-7fb662297324" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Jan 24 
07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.497551 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:38 crc kubenswrapper[4705]: E0124 07:43:38.497943 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:38.997924698 +0000 UTC m=+157.717797976 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.602032 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:38 crc kubenswrapper[4705]: E0124 07:43:38.603202 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-24 07:43:39.103187245 +0000 UTC m=+157.823060533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.702937 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:38 crc kubenswrapper[4705]: E0124 07:43:38.703530 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:39.203507561 +0000 UTC m=+157.923380849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.805022 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:38 crc kubenswrapper[4705]: E0124 07:43:38.805700 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:39.305684559 +0000 UTC m=+158.025557847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.903243 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqzgc" podStartSLOduration=135.903221365 podStartE2EDuration="2m15.903221365s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:38.834393045 +0000 UTC m=+157.554266333" watchObservedRunningTime="2026-01-24 07:43:38.903221365 +0000 UTC m=+157.623094653" Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.911093 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:38 crc kubenswrapper[4705]: E0124 07:43:38.911472 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:39.411458902 +0000 UTC m=+158.131332190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.925751 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:38 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:38 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:38 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.925801 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.933484 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:38 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:38 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:38 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:38 crc kubenswrapper[4705]: I0124 07:43:38.933550 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" 
podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.061762 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:39 crc kubenswrapper[4705]: E0124 07:43:39.062132 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:39.562119705 +0000 UTC m=+158.281992993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.165473 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:39 crc kubenswrapper[4705]: E0124 07:43:39.165756 4705 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:39.665742026 +0000 UTC m=+158.385615314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.173287 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" podStartSLOduration=135.173258742 podStartE2EDuration="2m15.173258742s" podCreationTimestamp="2026-01-24 07:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:39.16450104 +0000 UTC m=+157.884374328" watchObservedRunningTime="2026-01-24 07:43:39.173258742 +0000 UTC m=+157.893132040" Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.295875 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:39 crc kubenswrapper[4705]: E0124 07:43:39.296321 4705 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:39.796302491 +0000 UTC m=+158.516175779 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.339612 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hwqmb" podStartSLOduration=137.339596986 podStartE2EDuration="2m17.339596986s" podCreationTimestamp="2026-01-24 07:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:39.33939563 +0000 UTC m=+158.059268928" watchObservedRunningTime="2026-01-24 07:43:39.339596986 +0000 UTC m=+158.059470274" Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.400606 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:39 crc kubenswrapper[4705]: E0124 07:43:39.400998 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 07:43:39.900979992 +0000 UTC m=+158.620853280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.675681 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:39 crc kubenswrapper[4705]: E0124 07:43:39.676251 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:40.176235728 +0000 UTC m=+158.896109016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:39 crc kubenswrapper[4705]: W0124 07:43:39.713331 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab19a35e_a9f2_44e8_9e8c_40339f0a9195.slice/crio-605e813f42dffd07bd7438ccdf59928c4f89103a87af858343ae71d9b6ebde45 WatchSource:0}: Error finding container 605e813f42dffd07bd7438ccdf59928c4f89103a87af858343ae71d9b6ebde45: Status 404 returned error can't find the container with id 605e813f42dffd07bd7438ccdf59928c4f89103a87af858343ae71d9b6ebde45 Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.728859 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-s85hw" event={"ID":"b0f4e8b0-b8c3-4243-867e-2d63b524aebd","Type":"ContainerStarted","Data":"2162a8be3174d4eb32986cd3c103a69da7d8002e85bc1ac0c8e0a59dab6cdd3d"} Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.728903 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kqb54"] Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.756047 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-rfsmt" podStartSLOduration=136.756012173 podStartE2EDuration="2m16.756012173s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:39.740212168 +0000 UTC 
m=+158.460085456" watchObservedRunningTime="2026-01-24 07:43:39.756012173 +0000 UTC m=+158.475885511" Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.775775 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd" event={"ID":"f0639041-89a4-4d66-9670-e360fb45626d","Type":"ContainerStarted","Data":"085b384ff42855e521e75b21b499dbb2222a4238fd15c8d22c4639681f74b4fb"} Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.776936 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:39 crc kubenswrapper[4705]: E0124 07:43:39.777374 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:40.277359857 +0000 UTC m=+158.997233145 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.804025 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" event={"ID":"3a00e317-0d77-42ec-b31c-916797497da3","Type":"ContainerStarted","Data":"e1a28ea6b57e47c5750538895352e873781e1813d4f350132b7d6e380dc875db"} Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.805988 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.839714 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mz97d"] Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.849309 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" event={"ID":"9f220961-3dce-41b1-8ec4-26dece52318b","Type":"ContainerStarted","Data":"d660dfa418509dc173a90b55efb75787e69853d6a37a1e7362a8f56a9b0ae55d"} Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.851086 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" podStartSLOduration=137.851071457 podStartE2EDuration="2m17.851071457s" podCreationTimestamp="2026-01-24 07:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-24 07:43:39.832866243 +0000 UTC m=+158.552739531" watchObservedRunningTime="2026-01-24 07:43:39.851071457 +0000 UTC m=+158.570944745" Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.881019 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" event={"ID":"23a30e75-f6c5-417a-8931-2467fa9615a8","Type":"ContainerStarted","Data":"63fe45b44153fa8c05b98c2671d2b578b30375d6a1186048eeed19c742910862"} Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.881739 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" podStartSLOduration=137.881716648 podStartE2EDuration="2m17.881716648s" podCreationTimestamp="2026-01-24 07:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:39.873585864 +0000 UTC m=+158.593459152" watchObservedRunningTime="2026-01-24 07:43:39.881716648 +0000 UTC m=+158.601589936" Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.884249 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:39 crc kubenswrapper[4705]: E0124 07:43:39.885384 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:40.385371813 +0000 UTC m=+159.105245101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.891505 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zfr4t"] Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.900844 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:39 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:39 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:39 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.900897 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.922115 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5lpcd" podStartSLOduration=135.9220988 podStartE2EDuration="2m15.9220988s" podCreationTimestamp="2026-01-24 07:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:39.92176828 +0000 UTC m=+158.641641568" 
watchObservedRunningTime="2026-01-24 07:43:39.9220988 +0000 UTC m=+158.641972088" Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.932964 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-s85hw" podStartSLOduration=8.932944351 podStartE2EDuration="8.932944351s" podCreationTimestamp="2026-01-24 07:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:39.902932408 +0000 UTC m=+158.622805696" watchObservedRunningTime="2026-01-24 07:43:39.932944351 +0000 UTC m=+158.652817639" Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.947156 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" event={"ID":"02af14b8-f5ac-4ce9-a001-8389192957e1","Type":"ContainerStarted","Data":"0b5bd831571ed3ab768a5110f57843cc38798a1bb056bfd89c060054b6fae534"} Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.985566 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:39 crc kubenswrapper[4705]: E0124 07:43:39.986778 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:40.486756548 +0000 UTC m=+159.206629836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:39 crc kubenswrapper[4705]: I0124 07:43:39.991098 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" event={"ID":"0e7822fc-7419-4806-907b-b442a62f4baf","Type":"ContainerStarted","Data":"ba9438a09ca492f83859c9f3a5a5d7d6acf9f9c498072372d740b4fef2d09da6"} Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.003749 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lfjc8" podStartSLOduration=137.003733447 podStartE2EDuration="2m17.003733447s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:39.9669807 +0000 UTC m=+158.686853988" watchObservedRunningTime="2026-01-24 07:43:40.003733447 +0000 UTC m=+158.723606735" Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.005039 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-d54hp" podStartSLOduration=137.005035274 podStartE2EDuration="2m17.005035274s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:40.003670035 +0000 UTC m=+158.723543323" watchObservedRunningTime="2026-01-24 07:43:40.005035274 +0000 UTC m=+158.724908562" Jan 24 07:43:40 crc 
kubenswrapper[4705]: I0124 07:43:40.051853 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m" event={"ID":"592b0b4b-6b38-43e1-88cb-cfc5fb20f3cc","Type":"ContainerStarted","Data":"a36b6b0ad2726acdc83e98e6d44fef5d8cfb7f30c037c618d577256bda633ce5"} Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.091148 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:40 crc kubenswrapper[4705]: E0124 07:43:40.093004 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:40.592988004 +0000 UTC m=+159.312861292 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.119166 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gjggq" event={"ID":"a144ed50-2315-4874-a8f3-2f1f39111666","Type":"ContainerStarted","Data":"3a0892953c19ffc23f7a020c66e9a6ec9a3e51eb6c004cb8058a5d21ed671802"} Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.128107 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bnwdd"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.159162 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2dg4s"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.185946 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88" event={"ID":"b6e67495-3fae-45d0-a5d5-4741c0a763c9","Type":"ContainerStarted","Data":"9611d6b0230c45c636cd15fd2441da466de0a132e007aa9f69640a692762542d"} Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.197502 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:40 crc kubenswrapper[4705]: E0124 07:43:40.198587 4705 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:40.6985658 +0000 UTC m=+159.418439088 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.202209 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" podStartSLOduration=136.202185264 podStartE2EDuration="2m16.202185264s" podCreationTimestamp="2026-01-24 07:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:40.091845701 +0000 UTC m=+158.811718989" watchObservedRunningTime="2026-01-24 07:43:40.202185264 +0000 UTC m=+158.922058552" Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.203038 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h2p8m" podStartSLOduration=137.203033599 podStartE2EDuration="2m17.203033599s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:40.119691492 +0000 UTC m=+158.839564780" watchObservedRunningTime="2026-01-24 07:43:40.203033599 +0000 UTC m=+158.922906887" Jan 24 
07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.218167 4705 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rnmwt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.218231 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" podUID="90ec9237-0f8d-4641-8e07-7fb662297324" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.249809 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-gjggq" podStartSLOduration=137.249785474 podStartE2EDuration="2m17.249785474s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:40.184953759 +0000 UTC m=+158.904827047" watchObservedRunningTime="2026-01-24 07:43:40.249785474 +0000 UTC m=+158.969658762" Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.258117 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.285007 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5v88" podStartSLOduration=137.284990396 podStartE2EDuration="2m17.284990396s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-24 07:43:40.239646652 +0000 UTC m=+158.959519940" watchObservedRunningTime="2026-01-24 07:43:40.284990396 +0000 UTC m=+159.004863674" Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.300989 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:40 crc kubenswrapper[4705]: E0124 07:43:40.307333 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:40.807311438 +0000 UTC m=+159.527184786 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.348988 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gbdm8"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.410891 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:40 crc kubenswrapper[4705]: E0124 07:43:40.411516 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:40.911489525 +0000 UTC m=+159.631362813 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:40 crc kubenswrapper[4705]: W0124 07:43:40.431580 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8590cbe2_0d5d_4cc8_8a2e_36547eff6d6c.slice/crio-d2b3668fe805fbc2f28b1db8f94f91b3342642bff7756ace56ba34c55476e4a4 WatchSource:0}: Error finding container d2b3668fe805fbc2f28b1db8f94f91b3342642bff7756ace56ba34c55476e4a4: Status 404 returned error can't find the container with id d2b3668fe805fbc2f28b1db8f94f91b3342642bff7756ace56ba34c55476e4a4 Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.440028 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bjvr8"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.463584 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.503954 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.513619 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:40 crc kubenswrapper[4705]: E0124 07:43:40.514093 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:41.014070615 +0000 UTC m=+159.733943903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.607513 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.617845 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.618889 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:40 crc kubenswrapper[4705]: E0124 07:43:40.618950 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:41.118936051 +0000 UTC m=+159.838809339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.619469 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:40 crc kubenswrapper[4705]: E0124 07:43:40.620021 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:41.120009842 +0000 UTC m=+159.839883130 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.634998 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7czb5"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.636462 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-t774k"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.651075 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.657929 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.660756 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-58kmq"] Jan 24 07:43:40 crc kubenswrapper[4705]: W0124 07:43:40.671612 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a5ea77a_4a2f_4be0_81d3_9ecf84cfbe2c.slice/crio-943300d1645b6629ce33efa298f4caf38af08f4f8e92f3667ce7d052d129c620 WatchSource:0}: Error finding container 943300d1645b6629ce33efa298f4caf38af08f4f8e92f3667ce7d052d129c620: Status 404 returned error can't find the container with id 943300d1645b6629ce33efa298f4caf38af08f4f8e92f3667ce7d052d129c620 Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.680785 
4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.709843 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-n7xmf"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.720948 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:40 crc kubenswrapper[4705]: E0124 07:43:40.721159 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:41.22111932 +0000 UTC m=+159.940992608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.721450 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:40 crc kubenswrapper[4705]: E0124 07:43:40.721815 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:41.22180257 +0000 UTC m=+159.941675858 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.751379 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm"] Jan 24 07:43:40 crc kubenswrapper[4705]: W0124 07:43:40.759171 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf8e8ead_4e67_4775_bce3_b48236e30573.slice/crio-33e694c8f5eb61699dd4f93425632a6dd20c959c0cba097f9189a665f2457c57 WatchSource:0}: Error finding container 33e694c8f5eb61699dd4f93425632a6dd20c959c0cba097f9189a665f2457c57: Status 404 returned error can't find the container with id 33e694c8f5eb61699dd4f93425632a6dd20c959c0cba097f9189a665f2457c57 Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.762221 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7cvlf"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.777441 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h8gjk"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.797954 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pc548"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.799805 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-m5twk"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 
07:43:40.823385 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:40 crc kubenswrapper[4705]: E0124 07:43:40.823891 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:41.323866985 +0000 UTC m=+160.043740273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.933670 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6"] Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.935910 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:40 crc kubenswrapper[4705]: E0124 07:43:40.936381 4705 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:41.436356991 +0000 UTC m=+160.156230279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.952450 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:40 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:40 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:40 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:40 crc kubenswrapper[4705]: I0124 07:43:40.952540 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.045421 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:41 crc kubenswrapper[4705]: E0124 07:43:41.045950 4705 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:41.545932342 +0000 UTC m=+160.265805630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.148812 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:41 crc kubenswrapper[4705]: E0124 07:43:41.149279 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:41.649261694 +0000 UTC m=+160.369134972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.251917 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:41 crc kubenswrapper[4705]: E0124 07:43:41.252301 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:41.752280807 +0000 UTC m=+160.472154095 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.253708 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t774k" event={"ID":"6887bb61-9f22-4386-b263-866334b6529e","Type":"ContainerStarted","Data":"79283b705639d63cb4df2140054b49f52d632b3603a5e344e2e29a846af6bc31"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.255296 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" event={"ID":"da4379ba-31cd-436c-b1a6-8e715c0d2dca","Type":"ContainerStarted","Data":"5a53b0ae12e7ea394584ad250c9267bde4481c79c7e811dd3bdf39dc1e755ae8"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.256265 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" event={"ID":"3bb788e4-fad9-4416-9042-7a46d8ef83b3","Type":"ContainerStarted","Data":"857a0f42e0fc856d125e2ddb15532ac88b578dddeed392768d182f0c7ef69a99"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.282066 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" event={"ID":"0e7822fc-7419-4806-907b-b442a62f4baf","Type":"ContainerStarted","Data":"1a527c7b401319501b90debf59c548439bff2fe5c705d5fd42368a62c02f5b8a"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.319076 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zfr4t" 
event={"ID":"100fdbf1-ca31-4a06-9f27-c3be6e08e887","Type":"ContainerStarted","Data":"ee87a99b788443514d09b822f92e1e825bda9409417e34fceb4e2021ac5e07e2"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.319421 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zfr4t" event={"ID":"100fdbf1-ca31-4a06-9f27-c3be6e08e887","Type":"ContainerStarted","Data":"ae9ca7b43233cac3f337578a754a2772e99dfccc1f74a994eddac1ddf6335ada"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.319922 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-zfr4t" Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.328668 4705 patch_prober.go:28] interesting pod/console-operator-58897d9998-zfr4t container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/readyz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.328745 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zfr4t" podUID="100fdbf1-ca31-4a06-9f27-c3be6e08e887" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/readyz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.328746 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4" event={"ID":"84a6e017-145f-4005-85b2-3f027185ed6c","Type":"ContainerStarted","Data":"f824131b6310558c21176020d09195a7628bddb029233591a720f179e869715d"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.329428 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" 
podStartSLOduration=139.329406455 podStartE2EDuration="2m19.329406455s" podCreationTimestamp="2026-01-24 07:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:41.32921353 +0000 UTC m=+160.049086818" watchObservedRunningTime="2026-01-24 07:43:41.329406455 +0000 UTC m=+160.049279743" Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.334884 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7czb5" event={"ID":"c30fd97b-0555-479c-969f-4148e7bfb66d","Type":"ContainerStarted","Data":"e084d8d76d2b7e2ca7f3ceb56eae2a8737b100747bb30df0a4d7c4ed25538654"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.341343 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd" event={"ID":"df8e8ead-4e67-4775-bce3-b48236e30573","Type":"ContainerStarted","Data":"33e694c8f5eb61699dd4f93425632a6dd20c959c0cba097f9189a665f2457c57"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.350036 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" event={"ID":"544a6521-1632-42e7-aedc-c26453958c18","Type":"ContainerStarted","Data":"8e040e36fe7478704fefda80dc941211bfaa90b7f472785512e5597ce028850a"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.350101 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" event={"ID":"544a6521-1632-42e7-aedc-c26453958c18","Type":"ContainerStarted","Data":"b656d85b01d59907da0abccecc56e1ae47a400ff91fcde88cc6612bd1fbc3611"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.353890 4705 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pg499 container/openshift-config-operator namespace/openshift-config-operator: Readiness 
probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.353935 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" podUID="3a00e317-0d77-42ec-b31c-916797497da3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.354034 4705 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pg499 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.354053 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" podUID="3a00e317-0d77-42ec-b31c-916797497da3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.355340 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:41 crc kubenswrapper[4705]: E0124 07:43:41.356594 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:41.856564366 +0000 UTC m=+160.576437714 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.376278 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" event={"ID":"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c","Type":"ContainerStarted","Data":"d2b3668fe805fbc2f28b1db8f94f91b3342642bff7756ace56ba34c55476e4a4"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.380778 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n7xmf" event={"ID":"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf","Type":"ContainerStarted","Data":"6f65c273b1c4553e8d9011893a027e1be2ebf39a3606e40d444cd0e5499f6c8a"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.381266 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-zfr4t" podStartSLOduration=139.381253437 podStartE2EDuration="2m19.381253437s" podCreationTimestamp="2026-01-24 07:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:41.378744605 +0000 UTC m=+160.098617883" watchObservedRunningTime="2026-01-24 07:43:41.381253437 +0000 UTC m=+160.101126715" Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.396883 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml" event={"ID":"7a5ea77a-4a2f-4be0-81d3-9ecf84cfbe2c","Type":"ContainerStarted","Data":"943300d1645b6629ce33efa298f4caf38af08f4f8e92f3667ce7d052d129c620"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.426234 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" event={"ID":"483649f3-68d0-467c-b4ff-dfbb2b3c340a","Type":"ContainerStarted","Data":"4ac8a775eae3fd1ffd7316a0c34246b38e0bab52b50425f06dcce539e45b840f"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.448381 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kqb54" event={"ID":"ab19a35e-a9f2-44e8-9e8c-40339f0a9195","Type":"ContainerStarted","Data":"5d88d67b4e689bbaaa742a892b5e5cc49e50a4ecc0ec269f6164483f83fcfcda"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.448425 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kqb54" event={"ID":"ab19a35e-a9f2-44e8-9e8c-40339f0a9195","Type":"ContainerStarted","Data":"605e813f42dffd07bd7438ccdf59928c4f89103a87af858343ae71d9b6ebde45"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.450790 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7cvlf" event={"ID":"a84e98fc-8911-4fe1-8242-e906ccfdb277","Type":"ContainerStarted","Data":"00ccc47b5dff1677879ef760c32e6136974d72d01e9780d5d4031fae331362a7"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.452253 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-m5twk" event={"ID":"dfb189df-fd59-4ce7-9cf9-56966dab7850","Type":"ContainerStarted","Data":"79e6878e665fea8a438fd4063e768115f27f28801307bd9e518c3fa72f5ad79f"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.454762 4705 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bjvr8" event={"ID":"25819472-0561-4e4c-ad89-abfe02bc8484","Type":"ContainerStarted","Data":"e35a7d89701fbd7c9f8637d30a823e70f69c7030ab20594ed4945480539343f9"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.456208 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.457807 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" event={"ID":"6990313f-0f9b-4a82-b072-4f094768a28e","Type":"ContainerStarted","Data":"f7fd83703bc3b37eb3856d91504ff807f45e2ebe8d48327a5f2f2e5266498e8f"} Jan 24 07:43:41 crc kubenswrapper[4705]: E0124 07:43:41.457938 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:41.957917822 +0000 UTC m=+160.677791140 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.466105 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" event={"ID":"37dfc0ce-5c63-4de8-8b92-08626d0ef9c6","Type":"ContainerStarted","Data":"b60725c101c9bceb6936a671351b93ce8a4956b7ad03fcc8fbfdc8c03953f707"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.467631 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" event={"ID":"e6cfdf35-7edc-48ac-b81b-45d7d57c7654","Type":"ContainerStarted","Data":"059cadcd13af708d4e072616e46b3f6b70054c1e87e38536a4ac3400e28fa825"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.474930 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t" event={"ID":"ec9c8003-615a-49f5-b23a-3fad6ba93ffd","Type":"ContainerStarted","Data":"e930be0aebd2454bf40b0b6c480889b4034d272da14dbf7c0fd6dcf2138bf2a3"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.478961 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2dg4s" event={"ID":"4cfa9c40-ad55-4bb3-b7ba-4325816a760d","Type":"ContainerStarted","Data":"8ce57158e9c46fd825d3415270ee7da13efd307fc9615e7b47b9345c6db87e30"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.480033 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-bnwdd" event={"ID":"697d22ea-60a0-44b5-a5fb-17c7c9caaadb","Type":"ContainerStarted","Data":"283a1986d941fe191699d3a81b73bad2b4511d507004409e00ca7270c133c70e"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.480051 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bnwdd" event={"ID":"697d22ea-60a0-44b5-a5fb-17c7c9caaadb","Type":"ContainerStarted","Data":"64644fd52f1e99ce4235ef550b397bd2baed7ace5103cd0dbe93afd9f9df6fb4"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.480699 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-58kmq" event={"ID":"c573a3c6-adea-4c48-a71b-4644822c9caa","Type":"ContainerStarted","Data":"2ce4535db001386bf17c64b55a3e9fc0b07b518d362ff9a490981d4184b416d7"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.482844 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" event={"ID":"03eae766-055e-4339-a21d-f594802d636c","Type":"ContainerStarted","Data":"fe67956ee2d06de7d22bff9e0e7838be6abe1cc7d4df6b93196279b240da9bc9"} Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.519311 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-mz97d" podStartSLOduration=139.519292427 podStartE2EDuration="2m19.519292427s" podCreationTimestamp="2026-01-24 07:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:41.42101004 +0000 UTC m=+160.140883338" watchObservedRunningTime="2026-01-24 07:43:41.519292427 +0000 UTC m=+160.239165715" Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.566684 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:41 crc kubenswrapper[4705]: E0124 07:43:41.568330 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:42.068316837 +0000 UTC m=+160.788190125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.673392 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:41 crc kubenswrapper[4705]: E0124 07:43:41.674415 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:42.174388798 +0000 UTC m=+160.894262086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.775714 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:41 crc kubenswrapper[4705]: E0124 07:43:41.776341 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:42.27632778 +0000 UTC m=+160.996201068 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:41 crc kubenswrapper[4705]: I0124 07:43:41.885004 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:41 crc kubenswrapper[4705]: E0124 07:43:41.885491 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:42.385475259 +0000 UTC m=+161.105348547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.005295 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:42 crc kubenswrapper[4705]: E0124 07:43:42.017286 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:42.51726564 +0000 UTC m=+161.237138928 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.052207 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:42 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:42 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:42 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.055429 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.101870 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kqb54" podStartSLOduration=11.101851762 podStartE2EDuration="11.101851762s" podCreationTimestamp="2026-01-24 07:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:41.519895554 +0000 UTC m=+160.239768872" watchObservedRunningTime="2026-01-24 07:43:42.101851762 +0000 UTC m=+160.821725050" Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.122324 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:42 crc kubenswrapper[4705]: E0124 07:43:42.131490 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:42.631465524 +0000 UTC m=+161.351338812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.233032 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:42 crc kubenswrapper[4705]: E0124 07:43:42.234307 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:42.734295112 +0000 UTC m=+161.454168400 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.352705 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.352884 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.353216 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:42 crc kubenswrapper[4705]: E0124 07:43:42.353604 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:42.853587403 +0000 UTC m=+161.573460691 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.458620 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:42 crc kubenswrapper[4705]: E0124 07:43:42.459430 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:42.959414926 +0000 UTC m=+161.679288214 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.482535 4705 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pg499 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.482595 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499" podUID="3a00e317-0d77-42ec-b31c-916797497da3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.488635 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.585806 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:42 crc kubenswrapper[4705]: E0124 
07:43:42.586286 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:43.086265975 +0000 UTC m=+161.806139273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.586372 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.586130 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.587000 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:42 crc kubenswrapper[4705]: E0124 07:43:42.587269 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-24 07:43:43.087256973 +0000 UTC m=+161.807130341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.774681 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:42 crc kubenswrapper[4705]: E0124 07:43:42.775318 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:43.275297082 +0000 UTC m=+161.995170370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.875809 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:42 crc kubenswrapper[4705]: E0124 07:43:42.876200 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:43.376185224 +0000 UTC m=+162.096058512 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:42 crc kubenswrapper[4705]: I0124 07:43:42.979283 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:42 crc kubenswrapper[4705]: E0124 07:43:42.980515 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:43.480490674 +0000 UTC m=+162.200363962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.010773 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:43 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:43 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:43 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.011114 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.034724 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bjvr8" event={"ID":"25819472-0561-4e4c-ad89-abfe02bc8484","Type":"ContainerStarted","Data":"29debeeb81b21097f10093a439abedf6884f10df568dc12f99063e463ec490f5"} Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.042043 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2dg4s" event={"ID":"4cfa9c40-ad55-4bb3-b7ba-4325816a760d","Type":"ContainerStarted","Data":"cfa5a3dd036a9ff5edb794be736e82b37c1b6f2b41bffe4ff4c887207fb4fb7e"} Jan 24 07:43:43 crc 
kubenswrapper[4705]: I0124 07:43:43.064943 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4" event={"ID":"84a6e017-145f-4005-85b2-3f027185ed6c","Type":"ContainerStarted","Data":"ac72463971f9bb7cc88c7c74e45a12fac0f2583290385ad6a661ed3b052d8d25"} Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.074791 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" event={"ID":"e6cfdf35-7edc-48ac-b81b-45d7d57c7654","Type":"ContainerStarted","Data":"7b92494a76c566b7eb08dc74faafd47ed4d3494f7692c2b86ef19935aa11c812"} Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.075618 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.078097 4705 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rjxlz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.078165 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" podUID="e6cfdf35-7edc-48ac-b81b-45d7d57c7654" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.082770 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: 
\"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:43 crc kubenswrapper[4705]: E0124 07:43:43.084029 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:43.584015641 +0000 UTC m=+162.303888929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.097920 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t" event={"ID":"ec9c8003-615a-49f5-b23a-3fad6ba93ffd","Type":"ContainerStarted","Data":"65a0cf5443aa723e7ff8cec18363f1bb664a24c929c2338ae34e7a080de0efef"} Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.190967 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:43 crc kubenswrapper[4705]: E0124 07:43:43.191620 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 07:43:43.691605216 +0000 UTC m=+162.411478504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.294920 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:43 crc kubenswrapper[4705]: E0124 07:43:43.301314 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:43.80129367 +0000 UTC m=+162.521166958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.425588 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:43 crc kubenswrapper[4705]: E0124 07:43:43.426056 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:43.926039678 +0000 UTC m=+162.645912966 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.601177 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:43 crc kubenswrapper[4705]: E0124 07:43:43.601871 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:44.101853914 +0000 UTC m=+162.821727202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.740852 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:43 crc kubenswrapper[4705]: E0124 07:43:43.741222 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:44.241203232 +0000 UTC m=+162.961076520 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.847544 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.939837 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.939871 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:43 crc kubenswrapper[4705]: E0124 07:43:43.940318 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:44.440303549 +0000 UTC m=+163.160176837 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.960109 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:43 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:43 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:43 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:43 crc kubenswrapper[4705]: I0124 07:43:43.960191 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.069811 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:44 crc kubenswrapper[4705]: E0124 07:43:44.071257 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 07:43:44.571241915 +0000 UTC m=+163.291115203 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.128750 4705 patch_prober.go:28] interesting pod/console-operator-58897d9998-zfr4t container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.129258 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zfr4t" podUID="100fdbf1-ca31-4a06-9f27-c3be6e08e887" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.221856 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:44 crc kubenswrapper[4705]: E0124 07:43:44.222521 4705 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:44.722487595 +0000 UTC m=+163.442360893 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.270039 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-58kmq" event={"ID":"c573a3c6-adea-4c48-a71b-4644822c9caa","Type":"ContainerStarted","Data":"968fbdab3c120f00b35c65cd73759c0c0003388ac807f8cc4af8c099a269b304"} Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.277303 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" event={"ID":"3bb788e4-fad9-4416-9042-7a46d8ef83b3","Type":"ContainerStarted","Data":"0ebc6faa3a0b3d7dbd41ae1ccf8aadbbecde871135e49869f7d23000319f1c5b"} Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.278003 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.283018 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" event={"ID":"03eae766-055e-4339-a21d-f594802d636c","Type":"ContainerStarted","Data":"759d59dfdbe634640dbd49878a4820fbc6497ce88e5e0d5969bcaf6c4836c816"} Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.283972 4705 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.286038 4705 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-h8gjk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.286044 4705 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-npxg4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.286090 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" podUID="3bb788e4-fad9-4416-9042-7a46d8ef83b3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.286115 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" podUID="03eae766-055e-4339-a21d-f594802d636c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.287604 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7czb5" event={"ID":"c30fd97b-0555-479c-969f-4148e7bfb66d","Type":"ContainerStarted","Data":"b5c5dcf2f51f8939a8be191f1888c016dc9f84186241cf17e4e785cb16a6bf2e"} Jan 24 07:43:44 crc 
kubenswrapper[4705]: I0124 07:43:44.288635 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-7czb5" Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.291967 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.292007 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.301426 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" event={"ID":"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c","Type":"ContainerStarted","Data":"32ceb26596245d5b57df5f7f99b64dfbddb267ce7418207bafd4e702d24e88fc"} Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.304059 4705 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rjxlz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.304094 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" podUID="e6cfdf35-7edc-48ac-b81b-45d7d57c7654" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 
07:43:44.304497 4705 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-npxg4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.304518 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" podUID="03eae766-055e-4339-a21d-f594802d636c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.304563 4705 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-npxg4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.304576 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" podUID="03eae766-055e-4339-a21d-f594802d636c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.326246 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 07:43:44 crc kubenswrapper[4705]: E0124 07:43:44.326703 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:44.826683732 +0000 UTC m=+163.546557020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.435671 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc"
Jan 24 07:43:44 crc kubenswrapper[4705]: E0124 07:43:44.444403 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:44.944386857 +0000 UTC m=+163.664260245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.530757 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8"
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.537496 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 07:43:44 crc kubenswrapper[4705]: E0124 07:43:44.538778 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:45.038758462 +0000 UTC m=+163.758631750 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.562299 4705 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-c8vf8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 24 07:43:44 crc kubenswrapper[4705]: [+]log ok
Jan 24 07:43:44 crc kubenswrapper[4705]: [+]etcd ok
Jan 24 07:43:44 crc kubenswrapper[4705]: [+]etcd-readiness ok
Jan 24 07:43:44 crc kubenswrapper[4705]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 24 07:43:44 crc kubenswrapper[4705]: [-]informer-sync failed: reason withheld
Jan 24 07:43:44 crc kubenswrapper[4705]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 24 07:43:44 crc kubenswrapper[4705]: [+]poststarthook/max-in-flight-filter ok
Jan 24 07:43:44 crc kubenswrapper[4705]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 24 07:43:44 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-StartUserInformer ok
Jan 24 07:43:44 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Jan 24 07:43:44 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Jan 24 07:43:44 crc kubenswrapper[4705]: [+]shutdown ok
Jan 24 07:43:44 crc kubenswrapper[4705]: readyz check failed
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.562366 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" podUID="02af14b8-f5ac-4ce9-a001-8389192957e1" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.638985 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc"
Jan 24 07:43:44 crc kubenswrapper[4705]: E0124 07:43:44.639536 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:45.13951917 +0000 UTC m=+163.859392458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.647381 4705 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rjxlz container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body=
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.647645 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" podUID="e6cfdf35-7edc-48ac-b81b-45d7d57c7654" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused"
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.647378 4705 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rjxlz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body=
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.647858 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" podUID="e6cfdf35-7edc-48ac-b81b-45d7d57c7654" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused"
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.751484 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 07:43:44 crc kubenswrapper[4705]: E0124 07:43:44.751815 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:45.251798669 +0000 UTC m=+163.971671957 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.834424 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.853427 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc"
Jan 24 07:43:44 crc kubenswrapper[4705]: E0124 07:43:44.854974 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:45.354953966 +0000 UTC m=+164.074827264 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.897594 4705 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-h8gjk container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.897644 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" podUID="3bb788e4-fad9-4416-9042-7a46d8ef83b3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused"
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.898717 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pg499"
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.899036 4705 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-h8gjk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.899192 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" podUID="3bb788e4-fad9-4416-9042-7a46d8ef83b3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused"
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.915613 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 07:43:44 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld
Jan 24 07:43:44 crc kubenswrapper[4705]: [+]process-running ok
Jan 24 07:43:44 crc kubenswrapper[4705]: healthz check failed
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.915944 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.954354 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 07:43:44 crc kubenswrapper[4705]: E0124 07:43:44.954507 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:45.454471248 +0000 UTC m=+164.174344536 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:44 crc kubenswrapper[4705]: I0124 07:43:44.954534 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc"
Jan 24 07:43:44 crc kubenswrapper[4705]: E0124 07:43:44.955078 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:45.455060165 +0000 UTC m=+164.174933463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.061017 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 07:43:45 crc kubenswrapper[4705]: E0124 07:43:45.062255 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:45.562218338 +0000 UTC m=+164.282091646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.163645 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc"
Jan 24 07:43:45 crc kubenswrapper[4705]: E0124 07:43:45.164121 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:45.664096348 +0000 UTC m=+164.383969636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.264951 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 07:43:45 crc kubenswrapper[4705]: E0124 07:43:45.265254 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:45.765238697 +0000 UTC m=+164.485111985 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.325330 4705 patch_prober.go:28] interesting pod/console-operator-58897d9998-zfr4t container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.325396 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zfr4t" podUID="100fdbf1-ca31-4a06-9f27-c3be6e08e887" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.411180 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc"
Jan 24 07:43:45 crc kubenswrapper[4705]: E0124 07:43:45.411628 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:45.911612277 +0000 UTC m=+164.631485565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.509295 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.510203 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.513862 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.514054 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6541401d-7cfb-4aa2-815c-560140a9caf0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6541401d-7cfb-4aa2-815c-560140a9caf0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.514155 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6541401d-7cfb-4aa2-815c-560140a9caf0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6541401d-7cfb-4aa2-815c-560140a9caf0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 07:43:45 crc kubenswrapper[4705]: E0124 07:43:45.514272 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:46.014254329 +0000 UTC m=+164.734127617 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.523218 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.523617 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.559653 4705 patch_prober.go:28] interesting pod/console-operator-58897d9998-zfr4t container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.559710 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-zfr4t" podUID="100fdbf1-ca31-4a06-9f27-c3be6e08e887" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.570308 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.659053 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6541401d-7cfb-4aa2-815c-560140a9caf0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6541401d-7cfb-4aa2-815c-560140a9caf0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.659125 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc"
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.659158 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6541401d-7cfb-4aa2-815c-560140a9caf0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6541401d-7cfb-4aa2-815c-560140a9caf0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.659702 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6541401d-7cfb-4aa2-815c-560140a9caf0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6541401d-7cfb-4aa2-815c-560140a9caf0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 07:43:45 crc kubenswrapper[4705]: E0124 07:43:45.660537 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:46.160512326 +0000 UTC m=+164.880385684 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.666442 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-zfr4t"
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.789784 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 07:43:45 crc kubenswrapper[4705]: E0124 07:43:45.792425 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:46.292401459 +0000 UTC m=+165.012274757 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.890014 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" event={"ID":"da4379ba-31cd-436c-b1a6-8e715c0d2dca","Type":"ContainerStarted","Data":"ee1a8586524a1467178aca761783c32fe24e9781ace14a6ed5f8dbe2c9030d6e"}
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.965837 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc"
Jan 24 07:43:45 crc kubenswrapper[4705]: E0124 07:43:45.966330 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:46.466318271 +0000 UTC m=+165.186191559 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.972353 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 07:43:45 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld
Jan 24 07:43:45 crc kubenswrapper[4705]: [+]process-running ok
Jan 24 07:43:45 crc kubenswrapper[4705]: healthz check failed
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.972387 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 07:43:45 crc kubenswrapper[4705]: I0124 07:43:45.976378 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-m5twk" event={"ID":"dfb189df-fd59-4ce7-9cf9-56966dab7850","Type":"ContainerStarted","Data":"6696f5ef4a63011e5a5ae69abea918c7c58ba39c3d57b7cd4d5c4d52f7bbc966"}
Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.059782 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6541401d-7cfb-4aa2-815c-560140a9caf0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6541401d-7cfb-4aa2-815c-560140a9caf0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.071291 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 07:43:46 crc kubenswrapper[4705]: E0124 07:43:46.072008 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:46.57199265 +0000 UTC m=+165.291865938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.124055 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2dg4s" event={"ID":"4cfa9c40-ad55-4bb3-b7ba-4325816a760d","Type":"ContainerStarted","Data":"ed939678b56b1a7868c32e82f114e64032b3261c2b3bed3c5d3e679345786587"}
Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.126261 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n7xmf" event={"ID":"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf","Type":"ContainerStarted","Data":"9cc1ae9a09dcbfe038c06abc106d698166b6b7a3bea1ee9f6454b458f51521d0"}
Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.131165 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml" event={"ID":"7a5ea77a-4a2f-4be0-81d3-9ecf84cfbe2c","Type":"ContainerStarted","Data":"398a71793fc97d2da6f065dc14f2fe67acfa764c1927e9c064b053e53a697723"}
Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.131979 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" event={"ID":"37dfc0ce-5c63-4de8-8b92-08626d0ef9c6","Type":"ContainerStarted","Data":"9955a8505e3f464402a5fec35a9d63a86f587547ac21689dc1a2c3c6a035441a"}
Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.132905 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7cvlf" event={"ID":"a84e98fc-8911-4fe1-8242-e906ccfdb277","Type":"ContainerStarted","Data":"52c871f5c49b31b7c66db5b7c6dd45a8cbdc883a98fd4b6a554b41b42c50e379"}
Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.135760 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" event={"ID":"483649f3-68d0-467c-b4ff-dfbb2b3c340a","Type":"ContainerStarted","Data":"a236afff35e69752bd80d3d68c3d12956a8185e606826351d184c42cf18aa774"}
Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.135803 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6"
Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.138285 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.138370 4705 prober.go:107]
"Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.138440 4705 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rjxlz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.138462 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" podUID="e6cfdf35-7edc-48ac-b81b-45d7d57c7654" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.138513 4705 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-npxg4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.138530 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" podUID="03eae766-055e-4339-a21d-f594802d636c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.138614 4705 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-h8gjk container/marketplace-operator 
namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.138636 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" podUID="3bb788e4-fad9-4416-9042-7a46d8ef83b3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.173019 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.175516 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:46 crc kubenswrapper[4705]: E0124 07:43:46.176670 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:46.676656481 +0000 UTC m=+165.396529769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.177987 4705 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-b58b6 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.178041 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" podUID="483649f3-68d0-467c-b4ff-dfbb2b3c340a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.304573 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:46 crc kubenswrapper[4705]: E0124 07:43:46.307987 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 07:43:46.807954147 +0000 UTC m=+165.527827435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.308738 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:46 crc kubenswrapper[4705]: E0124 07:43:46.310396 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:46.810377317 +0000 UTC m=+165.530250605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.431424 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:46 crc kubenswrapper[4705]: E0124 07:43:46.431531 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:46.931508711 +0000 UTC m=+165.651381999 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.431622 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:46 crc kubenswrapper[4705]: E0124 07:43:46.432027 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:46.932016855 +0000 UTC m=+165.651890143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.533354 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:46 crc kubenswrapper[4705]: E0124 07:43:46.534258 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:47.034240346 +0000 UTC m=+165.754113634 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.664643 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:46 crc kubenswrapper[4705]: E0124 07:43:46.664962 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:47.164949755 +0000 UTC m=+165.884823043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.765040 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:46 crc kubenswrapper[4705]: E0124 07:43:46.765294 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:47.265278621 +0000 UTC m=+165.985151909 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.869856 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:46 crc kubenswrapper[4705]: E0124 07:43:46.870189 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:47.370177588 +0000 UTC m=+166.090050876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.959530 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:46 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:46 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:46 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.959602 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:46 crc kubenswrapper[4705]: I0124 07:43:46.974236 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:46 crc kubenswrapper[4705]: E0124 07:43:46.975217 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 07:43:47.475188428 +0000 UTC m=+166.195061716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.076445 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:47 crc kubenswrapper[4705]: E0124 07:43:47.077083 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:47.577061108 +0000 UTC m=+166.296934396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.116737 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" podStartSLOduration=144.116720418 podStartE2EDuration="2m24.116720418s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:47.113694061 +0000 UTC m=+165.833567349" watchObservedRunningTime="2026-01-24 07:43:47.116720418 +0000 UTC m=+165.836593706" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.140607 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b854t" podStartSLOduration=144.140590514 podStartE2EDuration="2m24.140590514s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:47.139847923 +0000 UTC m=+165.859721231" watchObservedRunningTime="2026-01-24 07:43:47.140590514 +0000 UTC m=+165.860463802" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.152753 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t774k" 
event={"ID":"6887bb61-9f22-4386-b263-866334b6529e","Type":"ContainerStarted","Data":"ea361d8bdf6db59e2e8dc72870e8648550f1a02b7c2dcd4efec4c58a41771d18"} Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.162710 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bjvr8" event={"ID":"25819472-0561-4e4c-ad89-abfe02bc8484","Type":"ContainerStarted","Data":"9bbc2d44bb8d030fc45dd2cdc3efe8a4a3bd63b0f6c6ddba86a180414d14016e"} Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.180271 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" event={"ID":"6990313f-0f9b-4a82-b072-4f094768a28e","Type":"ContainerStarted","Data":"e43c1dc82ccaaf734667ef4b5ce2afb9ad044d81912617847755437a67da0dc9"} Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.186247 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:47 crc kubenswrapper[4705]: E0124 07:43:47.186801 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:47.686773973 +0000 UTC m=+166.406647261 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.206004 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd" event={"ID":"df8e8ead-4e67-4775-bce3-b48236e30573","Type":"ContainerStarted","Data":"628a66a50398257fbdb5101d74575b6123483641d800db6ec9c5e2dd866d648e"} Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.236573 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" podStartSLOduration=143.236549564 podStartE2EDuration="2m23.236549564s" podCreationTimestamp="2026-01-24 07:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:47.218471404 +0000 UTC m=+165.938344702" watchObservedRunningTime="2026-01-24 07:43:47.236549564 +0000 UTC m=+165.956422852" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.279478 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml" event={"ID":"7a5ea77a-4a2f-4be0-81d3-9ecf84cfbe2c","Type":"ContainerStarted","Data":"8f803e63e3f01b0ef102d507e295d9a4868ed725d89263c9d997ea8bd17340f1"} Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.280788 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml" Jan 24 07:43:47 crc 
kubenswrapper[4705]: I0124 07:43:47.290714 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs\") pod \"network-metrics-daemon-mxnng\" (UID: \"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\") " pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.290984 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:47 crc kubenswrapper[4705]: E0124 07:43:47.295993 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:47.795971734 +0000 UTC m=+166.515845022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.306272 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaa7a0f6-16ad-42c1-b1e2-6c080807fda1-metrics-certs\") pod \"network-metrics-daemon-mxnng\" (UID: \"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1\") " pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.323044 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" event={"ID":"37dfc0ce-5c63-4de8-8b92-08626d0ef9c6","Type":"ContainerStarted","Data":"a3f6375e151d7cb9b386785a08513cae33089281d708061fe1cbc810f5482f83"} Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.384058 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bnwdd" event={"ID":"697d22ea-60a0-44b5-a5fb-17c7c9caaadb","Type":"ContainerStarted","Data":"3813b496716f097d950a64b38d77944e6d1aef2fcc551f351f16e6e2ba438ee5"} Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.385222 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.385257 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7czb5" 
podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.385302 4705 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-h8gjk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.385340 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" podUID="3bb788e4-fad9-4416-9042-7a46d8ef83b3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.385479 4705 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-b58b6 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.385506 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" podUID="483649f3-68d0-467c-b4ff-dfbb2b3c340a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.393078 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:47 crc kubenswrapper[4705]: E0124 07:43:47.394122 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:47.894099206 +0000 UTC m=+166.613972494 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.394729 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mxnng" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.429336 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" podStartSLOduration=143.429319399 podStartE2EDuration="2m23.429319399s" podCreationTimestamp="2026-01-24 07:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:47.326706588 +0000 UTC m=+166.046579876" watchObservedRunningTime="2026-01-24 07:43:47.429319399 +0000 UTC m=+166.149192687" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.495757 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:47 crc kubenswrapper[4705]: E0124 07:43:47.496208 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:47.996190542 +0000 UTC m=+166.716063830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.500238 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4c8z4" podStartSLOduration=144.500220578 podStartE2EDuration="2m24.500220578s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:47.500114275 +0000 UTC m=+166.219987563" watchObservedRunningTime="2026-01-24 07:43:47.500220578 +0000 UTC m=+166.220093866" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.548140 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" podStartSLOduration=145.548124316 podStartE2EDuration="2m25.548124316s" podCreationTimestamp="2026-01-24 07:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:47.547281692 +0000 UTC m=+166.267154980" watchObservedRunningTime="2026-01-24 07:43:47.548124316 +0000 UTC m=+166.267997604" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.574235 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-npxg4" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.599121 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:47 crc kubenswrapper[4705]: E0124 07:43:47.599452 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:48.099435162 +0000 UTC m=+166.819308450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.644091 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8vf8" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.701199 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:47 crc kubenswrapper[4705]: E0124 07:43:47.701595 4705 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:48.20158151 +0000 UTC m=+166.921454798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.806181 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:47 crc kubenswrapper[4705]: E0124 07:43:47.807322 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:48.307305351 +0000 UTC m=+167.027178639 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.828073 4705 patch_prober.go:28] interesting pod/apiserver-76f77b778f-bvnn7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]log ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]etcd ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/generic-apiserver-start-informers ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/max-in-flight-filter ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 24 07:43:47 crc kubenswrapper[4705]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 24 07:43:47 crc kubenswrapper[4705]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/project.openshift.io-projectcache ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-startinformers ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 24 07:43:47 crc 
kubenswrapper[4705]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 24 07:43:47 crc kubenswrapper[4705]: livez check failed Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.828143 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" podUID="0e7822fc-7419-4806-907b-b442a62f4baf" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.852368 4705 patch_prober.go:28] interesting pod/apiserver-76f77b778f-bvnn7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]log ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]etcd ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/generic-apiserver-start-informers ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/max-in-flight-filter ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 24 07:43:47 crc kubenswrapper[4705]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 24 07:43:47 crc kubenswrapper[4705]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/project.openshift.io-projectcache ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-startinformers ok Jan 24 07:43:47 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 24 07:43:47 crc 
kubenswrapper[4705]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 24 07:43:47 crc kubenswrapper[4705]: livez check failed Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.852471 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" podUID="0e7822fc-7419-4806-907b-b442a62f4baf" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.900130 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:47 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:47 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:47 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.900260 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.909417 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:47 crc kubenswrapper[4705]: E0124 07:43:47.909981 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-24 07:43:48.409965674 +0000 UTC m=+167.129838962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.920451 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-7czb5" podStartSLOduration=144.920416214 podStartE2EDuration="2m24.920416214s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:47.890476143 +0000 UTC m=+166.610349431" watchObservedRunningTime="2026-01-24 07:43:47.920416214 +0000 UTC m=+166.640289502" Jan 24 07:43:47 crc kubenswrapper[4705]: I0124 07:43:47.922925 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.010678 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:48 crc kubenswrapper[4705]: E0124 07:43:48.011445 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-24 07:43:48.511398841 +0000 UTC m=+167.231272129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.011592 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:48 crc kubenswrapper[4705]: E0124 07:43:48.012226 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:48.512203184 +0000 UTC m=+167.232076472 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.015604 4705 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.113364 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:48 crc kubenswrapper[4705]: E0124 07:43:48.113783 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:48.613757366 +0000 UTC m=+167.333630654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.245310 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:48 crc kubenswrapper[4705]: E0124 07:43:48.245767 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 07:43:48.745746212 +0000 UTC m=+167.465619500 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-m7gzc" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.266412 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" podStartSLOduration=144.266384625 podStartE2EDuration="2m24.266384625s" podCreationTimestamp="2026-01-24 07:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:48.256943244 +0000 UTC m=+166.976816532" watchObservedRunningTime="2026-01-24 07:43:48.266384625 +0000 UTC m=+166.986257923" Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.310210 4705 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-24T07:43:48.015667584Z","Handler":null,"Name":""} Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.353866 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:48 crc kubenswrapper[4705]: E0124 07:43:48.354426 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 07:43:48.854403667 +0000 UTC m=+167.574276955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.373452 4705 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.373526 4705 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.419782 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6541401d-7cfb-4aa2-815c-560140a9caf0","Type":"ContainerStarted","Data":"869ac7742922414e3f8a4249f6764135b01aedb6b5c764e67c37d9d0a94b6c12"} Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.435574 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-m5twk" event={"ID":"dfb189df-fd59-4ce7-9cf9-56966dab7850","Type":"ContainerStarted","Data":"a34e5652a71a2600a39895984a8c903dab6c01e709a659c379472d145a249a92"} Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.440018 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" 
event={"ID":"6990313f-0f9b-4a82-b072-4f094768a28e","Type":"ContainerStarted","Data":"fe7c8df28e5b7eab7a0e1898c4d72d03b83f31d55d7f1e3775b782cb159443e4"} Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.442879 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" event={"ID":"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c","Type":"ContainerStarted","Data":"3646aa23f4494228eb0586784089c931a09449b6fe6fd414ab8c108049bbc89a"} Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.445709 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t774k" event={"ID":"6887bb61-9f22-4386-b263-866334b6529e","Type":"ContainerStarted","Data":"1444cf98a3142ec10e413d9cfca52cfc34b51c9386e08ec1b6c8568e9a57f3c3"} Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.446623 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-bnwdd" Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.469457 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.481545 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2dg4s" podStartSLOduration=147.481513133 podStartE2EDuration="2m27.481513133s" podCreationTimestamp="2026-01-24 07:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:48.365883587 +0000 UTC m=+167.085756875" 
watchObservedRunningTime="2026-01-24 07:43:48.481513133 +0000 UTC m=+167.201386431" Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.490763 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.490797 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.627785 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-n7xmf" podStartSLOduration=145.627768309 podStartE2EDuration="2m25.627768309s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:48.540109148 +0000 UTC m=+167.259982446" watchObservedRunningTime="2026-01-24 07:43:48.627768309 +0000 UTC m=+167.347641587" Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.750390 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7cvlf" podStartSLOduration=145.750373196 podStartE2EDuration="2m25.750373196s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:48.626719749 +0000 UTC 
m=+167.346593037" watchObservedRunningTime="2026-01-24 07:43:48.750373196 +0000 UTC m=+167.470246474" Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.761703 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-m7gzc\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.811486 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-58kmq" podStartSLOduration=144.811464973 podStartE2EDuration="2m24.811464973s" podCreationTimestamp="2026-01-24 07:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:48.751454157 +0000 UTC m=+167.471327445" watchObservedRunningTime="2026-01-24 07:43:48.811464973 +0000 UTC m=+167.531338281" Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.847528 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.860445 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.895460 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:48 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:48 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:48 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:48 crc kubenswrapper[4705]: I0124 07:43:48.895516 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.123239 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.184151 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pc548" podStartSLOduration=146.184131662 podStartE2EDuration="2m26.184131662s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:49.168879813 +0000 UTC m=+167.888753101" watchObservedRunningTime="2026-01-24 07:43:49.184131662 +0000 UTC m=+167.904004950" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.184504 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-m5twk" podStartSLOduration=146.184483222 podStartE2EDuration="2m26.184483222s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:49.01683838 +0000 UTC m=+167.736711668" watchObservedRunningTime="2026-01-24 07:43:49.184483222 +0000 UTC m=+167.904356510" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.335313 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9qgbp"] Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.337690 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9qgbp" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.345286 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.388421 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9qgbp"] Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.521805 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbr5f\" (UniqueName: \"kubernetes.io/projected/e1f81499-3c8f-40b6-bd99-344558565c77-kube-api-access-vbr5f\") pod \"community-operators-9qgbp\" (UID: \"e1f81499-3c8f-40b6-bd99-344558565c77\") " pod="openshift-marketplace/community-operators-9qgbp" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.521860 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1f81499-3c8f-40b6-bd99-344558565c77-utilities\") pod \"community-operators-9qgbp\" (UID: \"e1f81499-3c8f-40b6-bd99-344558565c77\") " pod="openshift-marketplace/community-operators-9qgbp" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.521912 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1f81499-3c8f-40b6-bd99-344558565c77-catalog-content\") pod \"community-operators-9qgbp\" (UID: \"e1f81499-3c8f-40b6-bd99-344558565c77\") " pod="openshift-marketplace/community-operators-9qgbp" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.533442 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk2hm" podStartSLOduration=146.533418878 podStartE2EDuration="2m26.533418878s" 
podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:49.394256405 +0000 UTC m=+168.114129693" watchObservedRunningTime="2026-01-24 07:43:49.533418878 +0000 UTC m=+168.253292166" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.534654 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9ffcg"] Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.536104 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9ffcg" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.578256 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gz2gs"] Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.583416 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.592083 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gz2gs" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.623799 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbr5f\" (UniqueName: \"kubernetes.io/projected/e1f81499-3c8f-40b6-bd99-344558565c77-kube-api-access-vbr5f\") pod \"community-operators-9qgbp\" (UID: \"e1f81499-3c8f-40b6-bd99-344558565c77\") " pod="openshift-marketplace/community-operators-9qgbp" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.623876 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1f81499-3c8f-40b6-bd99-344558565c77-utilities\") pod \"community-operators-9qgbp\" (UID: \"e1f81499-3c8f-40b6-bd99-344558565c77\") " pod="openshift-marketplace/community-operators-9qgbp" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.623933 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1f81499-3c8f-40b6-bd99-344558565c77-catalog-content\") pod \"community-operators-9qgbp\" (UID: \"e1f81499-3c8f-40b6-bd99-344558565c77\") " pod="openshift-marketplace/community-operators-9qgbp" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.624571 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1f81499-3c8f-40b6-bd99-344558565c77-catalog-content\") pod \"community-operators-9qgbp\" (UID: \"e1f81499-3c8f-40b6-bd99-344558565c77\") " pod="openshift-marketplace/community-operators-9qgbp" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.625456 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1f81499-3c8f-40b6-bd99-344558565c77-utilities\") pod \"community-operators-9qgbp\" (UID: \"e1f81499-3c8f-40b6-bd99-344558565c77\") " 
pod="openshift-marketplace/community-operators-9qgbp" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.626382 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2s5cd" podStartSLOduration=146.626365281 podStartE2EDuration="2m26.626365281s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:49.614041787 +0000 UTC m=+168.333915075" watchObservedRunningTime="2026-01-24 07:43:49.626365281 +0000 UTC m=+168.346238569" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.628390 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.629023 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" event={"ID":"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c","Type":"ContainerStarted","Data":"a407dcb4e05f85830f864433019c7793964eb25ba6600b22c50fd4447a4d0ed6"} Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.629058 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9ffcg"] Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.629077 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mxnng"] Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.654842 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bjvr8" podStartSLOduration=146.65480866 podStartE2EDuration="2m26.65480866s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:49.644260646 +0000 UTC m=+168.364133944" watchObservedRunningTime="2026-01-24 07:43:49.65480866 +0000 UTC m=+168.374681948" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.690536 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml" podStartSLOduration=145.690520197 podStartE2EDuration="2m25.690520197s" podCreationTimestamp="2026-01-24 07:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:49.689418755 +0000 UTC m=+168.409292043" watchObservedRunningTime="2026-01-24 07:43:49.690520197 +0000 UTC m=+168.410393475" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.711702 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbr5f\" (UniqueName: \"kubernetes.io/projected/e1f81499-3c8f-40b6-bd99-344558565c77-kube-api-access-vbr5f\") pod \"community-operators-9qgbp\" (UID: \"e1f81499-3c8f-40b6-bd99-344558565c77\") " pod="openshift-marketplace/community-operators-9qgbp" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.712668 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gz2gs"] Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.750666 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m45d\" (UniqueName: \"kubernetes.io/projected/7a408baf-8e2f-438d-b77f-2abd317fe09f-kube-api-access-8m45d\") pod \"community-operators-gz2gs\" (UID: \"7a408baf-8e2f-438d-b77f-2abd317fe09f\") " pod="openshift-marketplace/community-operators-gz2gs" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.751047 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7a408baf-8e2f-438d-b77f-2abd317fe09f-catalog-content\") pod \"community-operators-gz2gs\" (UID: \"7a408baf-8e2f-438d-b77f-2abd317fe09f\") " pod="openshift-marketplace/community-operators-gz2gs" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.751314 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34532038-b143-4391-99f3-37275497f03e-utilities\") pod \"certified-operators-9ffcg\" (UID: \"34532038-b143-4391-99f3-37275497f03e\") " pod="openshift-marketplace/certified-operators-9ffcg" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.751343 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34532038-b143-4391-99f3-37275497f03e-catalog-content\") pod \"certified-operators-9ffcg\" (UID: \"34532038-b143-4391-99f3-37275497f03e\") " pod="openshift-marketplace/certified-operators-9ffcg" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.751378 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a408baf-8e2f-438d-b77f-2abd317fe09f-utilities\") pod \"community-operators-gz2gs\" (UID: \"7a408baf-8e2f-438d-b77f-2abd317fe09f\") " pod="openshift-marketplace/community-operators-gz2gs" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.751429 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56g7f\" (UniqueName: \"kubernetes.io/projected/34532038-b143-4391-99f3-37275497f03e-kube-api-access-56g7f\") pod \"certified-operators-9ffcg\" (UID: \"34532038-b143-4391-99f3-37275497f03e\") " pod="openshift-marketplace/certified-operators-9ffcg" Jan 24 07:43:49 crc kubenswrapper[4705]: W0124 07:43:49.805915 4705 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaa7a0f6_16ad_42c1_b1e2_6c080807fda1.slice/crio-921c177f8fb314326778a53febabd4604226414fd76c6dde47f4192a377601d2 WatchSource:0}: Error finding container 921c177f8fb314326778a53febabd4604226414fd76c6dde47f4192a377601d2: Status 404 returned error can't find the container with id 921c177f8fb314326778a53febabd4604226414fd76c6dde47f4192a377601d2 Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.846268 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9qgbp" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.944674 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t774k" podStartSLOduration=146.944652826 podStartE2EDuration="2m26.944652826s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:49.940297091 +0000 UTC m=+168.660170379" watchObservedRunningTime="2026-01-24 07:43:49.944652826 +0000 UTC m=+168.664526114" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.951995 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m45d\" (UniqueName: \"kubernetes.io/projected/7a408baf-8e2f-438d-b77f-2abd317fe09f-kube-api-access-8m45d\") pod \"community-operators-gz2gs\" (UID: \"7a408baf-8e2f-438d-b77f-2abd317fe09f\") " pod="openshift-marketplace/community-operators-gz2gs" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.952046 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a408baf-8e2f-438d-b77f-2abd317fe09f-catalog-content\") pod \"community-operators-gz2gs\" (UID: \"7a408baf-8e2f-438d-b77f-2abd317fe09f\") " 
pod="openshift-marketplace/community-operators-gz2gs" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.952073 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34532038-b143-4391-99f3-37275497f03e-utilities\") pod \"certified-operators-9ffcg\" (UID: \"34532038-b143-4391-99f3-37275497f03e\") " pod="openshift-marketplace/certified-operators-9ffcg" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.952100 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34532038-b143-4391-99f3-37275497f03e-catalog-content\") pod \"certified-operators-9ffcg\" (UID: \"34532038-b143-4391-99f3-37275497f03e\") " pod="openshift-marketplace/certified-operators-9ffcg" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.952129 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a408baf-8e2f-438d-b77f-2abd317fe09f-utilities\") pod \"community-operators-gz2gs\" (UID: \"7a408baf-8e2f-438d-b77f-2abd317fe09f\") " pod="openshift-marketplace/community-operators-gz2gs" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.952163 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56g7f\" (UniqueName: \"kubernetes.io/projected/34532038-b143-4391-99f3-37275497f03e-kube-api-access-56g7f\") pod \"certified-operators-9ffcg\" (UID: \"34532038-b143-4391-99f3-37275497f03e\") " pod="openshift-marketplace/certified-operators-9ffcg" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.954417 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34532038-b143-4391-99f3-37275497f03e-utilities\") pod \"certified-operators-9ffcg\" (UID: \"34532038-b143-4391-99f3-37275497f03e\") " 
pod="openshift-marketplace/certified-operators-9ffcg" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.954580 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34532038-b143-4391-99f3-37275497f03e-catalog-content\") pod \"certified-operators-9ffcg\" (UID: \"34532038-b143-4391-99f3-37275497f03e\") " pod="openshift-marketplace/certified-operators-9ffcg" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.955018 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a408baf-8e2f-438d-b77f-2abd317fe09f-utilities\") pod \"community-operators-gz2gs\" (UID: \"7a408baf-8e2f-438d-b77f-2abd317fe09f\") " pod="openshift-marketplace/community-operators-gz2gs" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.955524 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a408baf-8e2f-438d-b77f-2abd317fe09f-catalog-content\") pod \"community-operators-gz2gs\" (UID: \"7a408baf-8e2f-438d-b77f-2abd317fe09f\") " pod="openshift-marketplace/community-operators-gz2gs" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.960808 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:49 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:49 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:49 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.960887 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.967559 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4k26f"] Jan 24 07:43:49 crc kubenswrapper[4705]: I0124 07:43:49.968638 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4k26f" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.020381 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m45d\" (UniqueName: \"kubernetes.io/projected/7a408baf-8e2f-438d-b77f-2abd317fe09f-kube-api-access-8m45d\") pod \"community-operators-gz2gs\" (UID: \"7a408baf-8e2f-438d-b77f-2abd317fe09f\") " pod="openshift-marketplace/community-operators-gz2gs" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.039129 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4k26f"] Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.054677 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-catalog-content\") pod \"certified-operators-4k26f\" (UID: \"690b269f-3c5d-47b5-a11b-6c44dd6b1f95\") " pod="openshift-marketplace/certified-operators-4k26f" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.054756 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhhjt\" (UniqueName: \"kubernetes.io/projected/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-kube-api-access-vhhjt\") pod \"certified-operators-4k26f\" (UID: \"690b269f-3c5d-47b5-a11b-6c44dd6b1f95\") " pod="openshift-marketplace/certified-operators-4k26f" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.054846 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-utilities\") pod \"certified-operators-4k26f\" (UID: \"690b269f-3c5d-47b5-a11b-6c44dd6b1f95\") " pod="openshift-marketplace/certified-operators-4k26f" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.067372 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-bnwdd" podStartSLOduration=19.067346215 podStartE2EDuration="19.067346215s" podCreationTimestamp="2026-01-24 07:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:50.025933934 +0000 UTC m=+168.745807222" watchObservedRunningTime="2026-01-24 07:43:50.067346215 +0000 UTC m=+168.787219503" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.101572 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56g7f\" (UniqueName: \"kubernetes.io/projected/34532038-b143-4391-99f3-37275497f03e-kube-api-access-56g7f\") pod \"certified-operators-9ffcg\" (UID: \"34532038-b143-4391-99f3-37275497f03e\") " pod="openshift-marketplace/certified-operators-9ffcg" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.159871 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-catalog-content\") pod \"certified-operators-4k26f\" (UID: \"690b269f-3c5d-47b5-a11b-6c44dd6b1f95\") " pod="openshift-marketplace/certified-operators-4k26f" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.160286 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhhjt\" (UniqueName: \"kubernetes.io/projected/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-kube-api-access-vhhjt\") pod \"certified-operators-4k26f\" (UID: \"690b269f-3c5d-47b5-a11b-6c44dd6b1f95\") " 
pod="openshift-marketplace/certified-operators-4k26f" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.160341 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-utilities\") pod \"certified-operators-4k26f\" (UID: \"690b269f-3c5d-47b5-a11b-6c44dd6b1f95\") " pod="openshift-marketplace/certified-operators-4k26f" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.162090 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-utilities\") pod \"certified-operators-4k26f\" (UID: \"690b269f-3c5d-47b5-a11b-6c44dd6b1f95\") " pod="openshift-marketplace/certified-operators-4k26f" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.163380 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-catalog-content\") pod \"certified-operators-4k26f\" (UID: \"690b269f-3c5d-47b5-a11b-6c44dd6b1f95\") " pod="openshift-marketplace/certified-operators-4k26f" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.200212 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9ffcg" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.222796 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gz2gs" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.257962 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhhjt\" (UniqueName: \"kubernetes.io/projected/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-kube-api-access-vhhjt\") pod \"certified-operators-4k26f\" (UID: \"690b269f-3c5d-47b5-a11b-6c44dd6b1f95\") " pod="openshift-marketplace/certified-operators-4k26f" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.473145 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4k26f" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.555176 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-m7gzc"] Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.662462 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mxnng" event={"ID":"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1","Type":"ContainerStarted","Data":"921c177f8fb314326778a53febabd4604226414fd76c6dde47f4192a377601d2"} Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.682025 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6541401d-7cfb-4aa2-815c-560140a9caf0","Type":"ContainerStarted","Data":"01ae226291a0916f9c8e65627b016a8d433cdc0b5fcce4a419357e9a630f2512"} Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.688774 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" event={"ID":"e4e30be1-989b-4a5d-a33c-79c00184ce75","Type":"ContainerStarted","Data":"b2c16382b1d0cb3034d5a2d9254ca76a11123cfed2b717f529fc0487af5ab5ea"} Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.692167 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" event={"ID":"8590cbe2-0d5d-4cc8-8a2e-36547eff6d6c","Type":"ContainerStarted","Data":"4cbe3c6f46901ae585b7c5d7fef258acbe7c244d5f12e85a70840c9bdef89571"} Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.720772 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=5.720751238 podStartE2EDuration="5.720751238s" podCreationTimestamp="2026-01-24 07:43:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:50.719662376 +0000 UTC m=+169.439535664" watchObservedRunningTime="2026-01-24 07:43:50.720751238 +0000 UTC m=+169.440624516" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.760152 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gbdm8" podStartSLOduration=19.76013217 podStartE2EDuration="19.76013217s" podCreationTimestamp="2026-01-24 07:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:50.754460997 +0000 UTC m=+169.474334295" watchObservedRunningTime="2026-01-24 07:43:50.76013217 +0000 UTC m=+169.480005458" Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.770333 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9qgbp"] Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.915869 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:50 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:50 crc kubenswrapper[4705]: [+]process-running ok Jan 24 
07:43:50 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:50 crc kubenswrapper[4705]: I0124 07:43:50.916301 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.023490 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9ffcg"] Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.130358 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gz2gs"] Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.286773 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4k26f"] Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.312748 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pdt8p"] Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.314273 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pdt8p" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.319556 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.342736 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pdt8p"] Jan 24 07:43:51 crc kubenswrapper[4705]: W0124 07:43:51.348584 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod690b269f_3c5d_47b5_a11b_6c44dd6b1f95.slice/crio-2f8e6131d0792efed25a6b69d6b19f9f0fde735fdceaac46ec04392a9c66caf8 WatchSource:0}: Error finding container 2f8e6131d0792efed25a6b69d6b19f9f0fde735fdceaac46ec04392a9c66caf8: Status 404 returned error can't find the container with id 2f8e6131d0792efed25a6b69d6b19f9f0fde735fdceaac46ec04392a9c66caf8 Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.398743 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpldv\" (UniqueName: \"kubernetes.io/projected/a6b1d233-4df3-4960-abd3-c8bf11ca322b-kube-api-access-hpldv\") pod \"redhat-marketplace-pdt8p\" (UID: \"a6b1d233-4df3-4960-abd3-c8bf11ca322b\") " pod="openshift-marketplace/redhat-marketplace-pdt8p" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.398843 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b1d233-4df3-4960-abd3-c8bf11ca322b-utilities\") pod \"redhat-marketplace-pdt8p\" (UID: \"a6b1d233-4df3-4960-abd3-c8bf11ca322b\") " pod="openshift-marketplace/redhat-marketplace-pdt8p" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.398924 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/a6b1d233-4df3-4960-abd3-c8bf11ca322b-catalog-content\") pod \"redhat-marketplace-pdt8p\" (UID: \"a6b1d233-4df3-4960-abd3-c8bf11ca322b\") " pod="openshift-marketplace/redhat-marketplace-pdt8p" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.502405 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpldv\" (UniqueName: \"kubernetes.io/projected/a6b1d233-4df3-4960-abd3-c8bf11ca322b-kube-api-access-hpldv\") pod \"redhat-marketplace-pdt8p\" (UID: \"a6b1d233-4df3-4960-abd3-c8bf11ca322b\") " pod="openshift-marketplace/redhat-marketplace-pdt8p" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.502469 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b1d233-4df3-4960-abd3-c8bf11ca322b-utilities\") pod \"redhat-marketplace-pdt8p\" (UID: \"a6b1d233-4df3-4960-abd3-c8bf11ca322b\") " pod="openshift-marketplace/redhat-marketplace-pdt8p" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.502523 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b1d233-4df3-4960-abd3-c8bf11ca322b-catalog-content\") pod \"redhat-marketplace-pdt8p\" (UID: \"a6b1d233-4df3-4960-abd3-c8bf11ca322b\") " pod="openshift-marketplace/redhat-marketplace-pdt8p" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.503198 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b1d233-4df3-4960-abd3-c8bf11ca322b-catalog-content\") pod \"redhat-marketplace-pdt8p\" (UID: \"a6b1d233-4df3-4960-abd3-c8bf11ca322b\") " pod="openshift-marketplace/redhat-marketplace-pdt8p" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.503286 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a6b1d233-4df3-4960-abd3-c8bf11ca322b-utilities\") pod \"redhat-marketplace-pdt8p\" (UID: \"a6b1d233-4df3-4960-abd3-c8bf11ca322b\") " pod="openshift-marketplace/redhat-marketplace-pdt8p" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.536765 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpldv\" (UniqueName: \"kubernetes.io/projected/a6b1d233-4df3-4960-abd3-c8bf11ca322b-kube-api-access-hpldv\") pod \"redhat-marketplace-pdt8p\" (UID: \"a6b1d233-4df3-4960-abd3-c8bf11ca322b\") " pod="openshift-marketplace/redhat-marketplace-pdt8p" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.697494 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pdt8p" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.711939 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mxnng" event={"ID":"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1","Type":"ContainerStarted","Data":"fc99b745784e352078c347fe673a24734250a6190f6725053eeaba515ed94068"} Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.711994 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mxnng" event={"ID":"aaa7a0f6-16ad-42c1-b1e2-6c080807fda1","Type":"ContainerStarted","Data":"7ee30c36b2361889af44beeb18d9b66c10c846201cc9e69ad554328109baf5a2"} Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.718475 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ffcg" event={"ID":"34532038-b143-4391-99f3-37275497f03e","Type":"ContainerStarted","Data":"cfdf48fc7acde935ee114fcdb4b233be79879019e64d6759e7288484e4bfea8c"} Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.718897 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ffcg" 
event={"ID":"34532038-b143-4391-99f3-37275497f03e","Type":"ContainerStarted","Data":"6ef71f1bc6cab50a9570168871c5866c2a6861cb83381cdd030118fdd14583bc"} Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.726019 4705 generic.go:334] "Generic (PLEG): container finished" podID="6541401d-7cfb-4aa2-815c-560140a9caf0" containerID="01ae226291a0916f9c8e65627b016a8d433cdc0b5fcce4a419357e9a630f2512" exitCode=0 Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.726097 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6541401d-7cfb-4aa2-815c-560140a9caf0","Type":"ContainerDied","Data":"01ae226291a0916f9c8e65627b016a8d433cdc0b5fcce4a419357e9a630f2512"} Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.729615 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qmflx"] Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.731095 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmflx" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.733362 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" event={"ID":"e4e30be1-989b-4a5d-a33c-79c00184ce75","Type":"ContainerStarted","Data":"3d30c7ca445101f777fd4cd06988c3b07a2b0773a6b418e514da9c0a3bcb374d"} Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.734259 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.737238 4705 generic.go:334] "Generic (PLEG): container finished" podID="e1f81499-3c8f-40b6-bd99-344558565c77" containerID="76a3a1df75a4129552f10a18ef9d7ec5c1c1cdb8c394944962c348f59e0c40a9" exitCode=0 Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.737287 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qgbp" event={"ID":"e1f81499-3c8f-40b6-bd99-344558565c77","Type":"ContainerDied","Data":"76a3a1df75a4129552f10a18ef9d7ec5c1c1cdb8c394944962c348f59e0c40a9"} Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.737306 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qgbp" event={"ID":"e1f81499-3c8f-40b6-bd99-344558565c77","Type":"ContainerStarted","Data":"cf0fe8a76ecf068e111eea51590bb8c21095ffcb455c4482ca530a448d1fff78"} Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.739198 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.746283 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k26f" event={"ID":"690b269f-3c5d-47b5-a11b-6c44dd6b1f95","Type":"ContainerStarted","Data":"2f8e6131d0792efed25a6b69d6b19f9f0fde735fdceaac46ec04392a9c66caf8"} Jan 
24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.748602 4705 generic.go:334] "Generic (PLEG): container finished" podID="7a408baf-8e2f-438d-b77f-2abd317fe09f" containerID="fa2e50dd5dd9061c98e738a23e27571fb8e4199780677a9a3bb7f7cd36e0f88d" exitCode=0 Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.749708 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gz2gs" event={"ID":"7a408baf-8e2f-438d-b77f-2abd317fe09f","Type":"ContainerDied","Data":"fa2e50dd5dd9061c98e738a23e27571fb8e4199780677a9a3bb7f7cd36e0f88d"} Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.749753 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gz2gs" event={"ID":"7a408baf-8e2f-438d-b77f-2abd317fe09f","Type":"ContainerStarted","Data":"166512c0f8e629d187dc072e15f26656604ca7c5e311fcc49195ab97bf4b0354"} Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.750417 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmflx"] Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.888644 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-mxnng" podStartSLOduration=149.888628318 podStartE2EDuration="2m29.888628318s" podCreationTimestamp="2026-01-24 07:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:51.887125195 +0000 UTC m=+170.606998503" watchObservedRunningTime="2026-01-24 07:43:51.888628318 +0000 UTC m=+170.608501606" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.903955 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:51 crc 
kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:51 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:51 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.904016 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.922803 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74ac561c-5afe-4308-814f-11bf3f93f4ac-utilities\") pod \"redhat-marketplace-qmflx\" (UID: \"74ac561c-5afe-4308-814f-11bf3f93f4ac\") " pod="openshift-marketplace/redhat-marketplace-qmflx" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.922887 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqm7x\" (UniqueName: \"kubernetes.io/projected/74ac561c-5afe-4308-814f-11bf3f93f4ac-kube-api-access-nqm7x\") pod \"redhat-marketplace-qmflx\" (UID: \"74ac561c-5afe-4308-814f-11bf3f93f4ac\") " pod="openshift-marketplace/redhat-marketplace-qmflx" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.923124 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74ac561c-5afe-4308-814f-11bf3f93f4ac-catalog-content\") pod \"redhat-marketplace-qmflx\" (UID: \"74ac561c-5afe-4308-814f-11bf3f93f4ac\") " pod="openshift-marketplace/redhat-marketplace-qmflx" Jan 24 07:43:51 crc kubenswrapper[4705]: I0124 07:43:51.941188 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" podStartSLOduration=148.941169559 
podStartE2EDuration="2m28.941169559s" podCreationTimestamp="2026-01-24 07:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:43:51.935603229 +0000 UTC m=+170.655476517" watchObservedRunningTime="2026-01-24 07:43:51.941169559 +0000 UTC m=+170.661042887" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.030073 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74ac561c-5afe-4308-814f-11bf3f93f4ac-catalog-content\") pod \"redhat-marketplace-qmflx\" (UID: \"74ac561c-5afe-4308-814f-11bf3f93f4ac\") " pod="openshift-marketplace/redhat-marketplace-qmflx" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.030146 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74ac561c-5afe-4308-814f-11bf3f93f4ac-utilities\") pod \"redhat-marketplace-qmflx\" (UID: \"74ac561c-5afe-4308-814f-11bf3f93f4ac\") " pod="openshift-marketplace/redhat-marketplace-qmflx" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.030169 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqm7x\" (UniqueName: \"kubernetes.io/projected/74ac561c-5afe-4308-814f-11bf3f93f4ac-kube-api-access-nqm7x\") pod \"redhat-marketplace-qmflx\" (UID: \"74ac561c-5afe-4308-814f-11bf3f93f4ac\") " pod="openshift-marketplace/redhat-marketplace-qmflx" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.030932 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74ac561c-5afe-4308-814f-11bf3f93f4ac-catalog-content\") pod \"redhat-marketplace-qmflx\" (UID: \"74ac561c-5afe-4308-814f-11bf3f93f4ac\") " pod="openshift-marketplace/redhat-marketplace-qmflx" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.030968 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74ac561c-5afe-4308-814f-11bf3f93f4ac-utilities\") pod \"redhat-marketplace-qmflx\" (UID: \"74ac561c-5afe-4308-814f-11bf3f93f4ac\") " pod="openshift-marketplace/redhat-marketplace-qmflx" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.136609 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqm7x\" (UniqueName: \"kubernetes.io/projected/74ac561c-5afe-4308-814f-11bf3f93f4ac-kube-api-access-nqm7x\") pod \"redhat-marketplace-qmflx\" (UID: \"74ac561c-5afe-4308-814f-11bf3f93f4ac\") " pod="openshift-marketplace/redhat-marketplace-qmflx" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.308322 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-64v8j"] Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.312628 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-64v8j" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.315138 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.326457 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-64v8j"] Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.374175 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.381901 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-bvnn7" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.448552 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-utilities\") pod \"redhat-operators-64v8j\" (UID: \"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2\") " pod="openshift-marketplace/redhat-operators-64v8j" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.448641 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-catalog-content\") pod \"redhat-operators-64v8j\" (UID: \"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2\") " pod="openshift-marketplace/redhat-operators-64v8j" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.448697 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf96d\" (UniqueName: \"kubernetes.io/projected/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-kube-api-access-mf96d\") pod \"redhat-operators-64v8j\" (UID: \"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2\") " pod="openshift-marketplace/redhat-operators-64v8j" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.468217 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-bnwdd" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.508738 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmflx" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.552739 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf96d\" (UniqueName: \"kubernetes.io/projected/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-kube-api-access-mf96d\") pod \"redhat-operators-64v8j\" (UID: \"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2\") " pod="openshift-marketplace/redhat-operators-64v8j" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.552927 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-utilities\") pod \"redhat-operators-64v8j\" (UID: \"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2\") " pod="openshift-marketplace/redhat-operators-64v8j" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.553074 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-catalog-content\") pod \"redhat-operators-64v8j\" (UID: \"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2\") " pod="openshift-marketplace/redhat-operators-64v8j" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.556050 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-catalog-content\") pod \"redhat-operators-64v8j\" (UID: \"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2\") " pod="openshift-marketplace/redhat-operators-64v8j" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.556066 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-utilities\") pod \"redhat-operators-64v8j\" (UID: \"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2\") " 
pod="openshift-marketplace/redhat-operators-64v8j" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.619255 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf96d\" (UniqueName: \"kubernetes.io/projected/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-kube-api-access-mf96d\") pod \"redhat-operators-64v8j\" (UID: \"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2\") " pod="openshift-marketplace/redhat-operators-64v8j" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.703771 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7lfr8"] Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.704990 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.721100 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7lfr8"] Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.818883 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db373164-2a89-4bd8-803b-3a3c4554846c-catalog-content\") pod \"redhat-operators-7lfr8\" (UID: \"db373164-2a89-4bd8-803b-3a3c4554846c\") " pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.819232 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfv5s\" (UniqueName: \"kubernetes.io/projected/db373164-2a89-4bd8-803b-3a3c4554846c-kube-api-access-zfv5s\") pod \"redhat-operators-7lfr8\" (UID: \"db373164-2a89-4bd8-803b-3a3c4554846c\") " pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.819338 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/db373164-2a89-4bd8-803b-3a3c4554846c-utilities\") pod \"redhat-operators-7lfr8\" (UID: \"db373164-2a89-4bd8-803b-3a3c4554846c\") " pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.820176 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-64v8j" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.897770 4705 generic.go:334] "Generic (PLEG): container finished" podID="da4379ba-31cd-436c-b1a6-8e715c0d2dca" containerID="ee1a8586524a1467178aca761783c32fe24e9781ace14a6ed5f8dbe2c9030d6e" exitCode=0 Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.897845 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" event={"ID":"da4379ba-31cd-436c-b1a6-8e715c0d2dca","Type":"ContainerDied","Data":"ee1a8586524a1467178aca761783c32fe24e9781ace14a6ed5f8dbe2c9030d6e"} Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.900062 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:52 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:52 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:52 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.900094 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.915606 4705 generic.go:334] "Generic (PLEG): container finished" podID="34532038-b143-4391-99f3-37275497f03e" 
containerID="cfdf48fc7acde935ee114fcdb4b233be79879019e64d6759e7288484e4bfea8c" exitCode=0 Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.915681 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ffcg" event={"ID":"34532038-b143-4391-99f3-37275497f03e","Type":"ContainerDied","Data":"cfdf48fc7acde935ee114fcdb4b233be79879019e64d6759e7288484e4bfea8c"} Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.918517 4705 generic.go:334] "Generic (PLEG): container finished" podID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" containerID="5605285eb04c617456491792796c25163cd88b1d7558e4f4beeeb5479caa3b2c" exitCode=0 Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.919860 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db373164-2a89-4bd8-803b-3a3c4554846c-catalog-content\") pod \"redhat-operators-7lfr8\" (UID: \"db373164-2a89-4bd8-803b-3a3c4554846c\") " pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.919894 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfv5s\" (UniqueName: \"kubernetes.io/projected/db373164-2a89-4bd8-803b-3a3c4554846c-kube-api-access-zfv5s\") pod \"redhat-operators-7lfr8\" (UID: \"db373164-2a89-4bd8-803b-3a3c4554846c\") " pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.919913 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db373164-2a89-4bd8-803b-3a3c4554846c-utilities\") pod \"redhat-operators-7lfr8\" (UID: \"db373164-2a89-4bd8-803b-3a3c4554846c\") " pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:43:52 crc kubenswrapper[4705]: I0124 07:43:52.934656 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-4k26f" event={"ID":"690b269f-3c5d-47b5-a11b-6c44dd6b1f95","Type":"ContainerDied","Data":"5605285eb04c617456491792796c25163cd88b1d7558e4f4beeeb5479caa3b2c"} Jan 24 07:43:53 crc kubenswrapper[4705]: I0124 07:43:53.005021 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db373164-2a89-4bd8-803b-3a3c4554846c-utilities\") pod \"redhat-operators-7lfr8\" (UID: \"db373164-2a89-4bd8-803b-3a3c4554846c\") " pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:43:53 crc kubenswrapper[4705]: I0124 07:43:53.005060 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db373164-2a89-4bd8-803b-3a3c4554846c-catalog-content\") pod \"redhat-operators-7lfr8\" (UID: \"db373164-2a89-4bd8-803b-3a3c4554846c\") " pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:43:53 crc kubenswrapper[4705]: I0124 07:43:53.227256 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfv5s\" (UniqueName: \"kubernetes.io/projected/db373164-2a89-4bd8-803b-3a3c4554846c-kube-api-access-zfv5s\") pod \"redhat-operators-7lfr8\" (UID: \"db373164-2a89-4bd8-803b-3a3c4554846c\") " pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:43:53 crc kubenswrapper[4705]: I0124 07:43:53.325205 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:43:53 crc kubenswrapper[4705]: I0124 07:43:53.909249 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:53 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:53 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:53 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:53 crc kubenswrapper[4705]: I0124 07:43:53.909543 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:54 crc kubenswrapper[4705]: I0124 07:43:54.072381 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 24 07:43:54 crc kubenswrapper[4705]: I0124 07:43:54.072447 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:43:54 crc kubenswrapper[4705]: I0124 07:43:54.073070 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 24 07:43:54 crc kubenswrapper[4705]: I0124 07:43:54.073093 
4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:43:54 crc kubenswrapper[4705]: I0124 07:43:54.672647 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:54 crc kubenswrapper[4705]: I0124 07:43:54.672914 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:43:54 crc kubenswrapper[4705]: I0124 07:43:54.673687 4705 patch_prober.go:28] interesting pod/console-f9d7485db-n7xmf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 24 07:43:54 crc kubenswrapper[4705]: I0124 07:43:54.673716 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-n7xmf" podUID="fffd44a1-fefc-43dc-81d7-7290a3f2d6cf" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 24 07:43:54 crc kubenswrapper[4705]: I0124 07:43:54.692233 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rjxlz" Jan 24 07:43:54 crc kubenswrapper[4705]: I0124 07:43:54.696526 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b58b6" Jan 24 07:43:54 crc kubenswrapper[4705]: I0124 07:43:54.723570 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pdt8p"] Jan 24 07:43:54 crc kubenswrapper[4705]: I0124 
07:43:54.989509 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.052050 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pdt8p" event={"ID":"a6b1d233-4df3-4960-abd3-c8bf11ca322b","Type":"ContainerStarted","Data":"829302d55b953fff4568d337610ad7902bf612cef54d96cd52f9e9c8cb7c4bf7"} Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.180157 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:55 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:55 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:55 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.180517 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.530370 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmflx"] Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.650232 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.704273 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.716630 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d77ct\" (UniqueName: \"kubernetes.io/projected/da4379ba-31cd-436c-b1a6-8e715c0d2dca-kube-api-access-d77ct\") pod \"da4379ba-31cd-436c-b1a6-8e715c0d2dca\" (UID: \"da4379ba-31cd-436c-b1a6-8e715c0d2dca\") " Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.716832 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da4379ba-31cd-436c-b1a6-8e715c0d2dca-config-volume\") pod \"da4379ba-31cd-436c-b1a6-8e715c0d2dca\" (UID: \"da4379ba-31cd-436c-b1a6-8e715c0d2dca\") " Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.716960 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da4379ba-31cd-436c-b1a6-8e715c0d2dca-secret-volume\") pod \"da4379ba-31cd-436c-b1a6-8e715c0d2dca\" (UID: \"da4379ba-31cd-436c-b1a6-8e715c0d2dca\") " Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.716992 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6541401d-7cfb-4aa2-815c-560140a9caf0-kubelet-dir\") pod \"6541401d-7cfb-4aa2-815c-560140a9caf0\" (UID: \"6541401d-7cfb-4aa2-815c-560140a9caf0\") " Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.717018 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6541401d-7cfb-4aa2-815c-560140a9caf0-kube-api-access\") pod \"6541401d-7cfb-4aa2-815c-560140a9caf0\" (UID: \"6541401d-7cfb-4aa2-815c-560140a9caf0\") " Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.719707 4705 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/6541401d-7cfb-4aa2-815c-560140a9caf0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6541401d-7cfb-4aa2-815c-560140a9caf0" (UID: "6541401d-7cfb-4aa2-815c-560140a9caf0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.719934 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4379ba-31cd-436c-b1a6-8e715c0d2dca-config-volume" (OuterVolumeSpecName: "config-volume") pod "da4379ba-31cd-436c-b1a6-8e715c0d2dca" (UID: "da4379ba-31cd-436c-b1a6-8e715c0d2dca"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.732728 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da4379ba-31cd-436c-b1a6-8e715c0d2dca-kube-api-access-d77ct" (OuterVolumeSpecName: "kube-api-access-d77ct") pod "da4379ba-31cd-436c-b1a6-8e715c0d2dca" (UID: "da4379ba-31cd-436c-b1a6-8e715c0d2dca"). InnerVolumeSpecName "kube-api-access-d77ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.733880 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da4379ba-31cd-436c-b1a6-8e715c0d2dca-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "da4379ba-31cd-436c-b1a6-8e715c0d2dca" (UID: "da4379ba-31cd-436c-b1a6-8e715c0d2dca"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.761678 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6541401d-7cfb-4aa2-815c-560140a9caf0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6541401d-7cfb-4aa2-815c-560140a9caf0" (UID: "6541401d-7cfb-4aa2-815c-560140a9caf0"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.822450 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d77ct\" (UniqueName: \"kubernetes.io/projected/da4379ba-31cd-436c-b1a6-8e715c0d2dca-kube-api-access-d77ct\") on node \"crc\" DevicePath \"\"" Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.822719 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da4379ba-31cd-436c-b1a6-8e715c0d2dca-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.822729 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da4379ba-31cd-436c-b1a6-8e715c0d2dca-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.822740 4705 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6541401d-7cfb-4aa2-815c-560140a9caf0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.822748 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6541401d-7cfb-4aa2-815c-560140a9caf0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.912075 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:55 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:55 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:55 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:55 crc kubenswrapper[4705]: 
I0124 07:43:55.912128 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.930861 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7lfr8"] Jan 24 07:43:55 crc kubenswrapper[4705]: I0124 07:43:55.995592 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-64v8j"] Jan 24 07:43:56 crc kubenswrapper[4705]: I0124 07:43:56.107611 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" Jan 24 07:43:56 crc kubenswrapper[4705]: I0124 07:43:56.107649 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl" event={"ID":"da4379ba-31cd-436c-b1a6-8e715c0d2dca","Type":"ContainerDied","Data":"5a53b0ae12e7ea394584ad250c9267bde4481c79c7e811dd3bdf39dc1e755ae8"} Jan 24 07:43:56 crc kubenswrapper[4705]: I0124 07:43:56.107712 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a53b0ae12e7ea394584ad250c9267bde4481c79c7e811dd3bdf39dc1e755ae8" Jan 24 07:43:56 crc kubenswrapper[4705]: I0124 07:43:56.111922 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6541401d-7cfb-4aa2-815c-560140a9caf0","Type":"ContainerDied","Data":"869ac7742922414e3f8a4249f6764135b01aedb6b5c764e67c37d9d0a94b6c12"} Jan 24 07:43:56 crc kubenswrapper[4705]: I0124 07:43:56.111973 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="869ac7742922414e3f8a4249f6764135b01aedb6b5c764e67c37d9d0a94b6c12" Jan 24 07:43:56 crc kubenswrapper[4705]: I0124 
07:43:56.112037 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 07:43:56 crc kubenswrapper[4705]: I0124 07:43:56.167369 4705 generic.go:334] "Generic (PLEG): container finished" podID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" containerID="5d1b64cfb3a4e012c3e0002ddf8ab321364abb7b1c649b570310b2cfe0c1b05d" exitCode=0 Jan 24 07:43:56 crc kubenswrapper[4705]: I0124 07:43:56.177254 4705 generic.go:334] "Generic (PLEG): container finished" podID="74ac561c-5afe-4308-814f-11bf3f93f4ac" containerID="20044c78fdb33d68e095188e7d16475c95b9503974eeadfc3afd8f4954bd5162" exitCode=0 Jan 24 07:43:56 crc kubenswrapper[4705]: I0124 07:43:56.167490 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pdt8p" event={"ID":"a6b1d233-4df3-4960-abd3-c8bf11ca322b","Type":"ContainerDied","Data":"5d1b64cfb3a4e012c3e0002ddf8ab321364abb7b1c649b570310b2cfe0c1b05d"} Jan 24 07:43:56 crc kubenswrapper[4705]: I0124 07:43:56.192957 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmflx" event={"ID":"74ac561c-5afe-4308-814f-11bf3f93f4ac","Type":"ContainerDied","Data":"20044c78fdb33d68e095188e7d16475c95b9503974eeadfc3afd8f4954bd5162"} Jan 24 07:43:56 crc kubenswrapper[4705]: I0124 07:43:56.192986 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmflx" event={"ID":"74ac561c-5afe-4308-814f-11bf3f93f4ac","Type":"ContainerStarted","Data":"b79894bc8bace6e7dede67395fe723e88885a925c7e8756dd667b76b48b077d2"} Jan 24 07:43:56 crc kubenswrapper[4705]: I0124 07:43:56.192998 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64v8j" event={"ID":"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2","Type":"ContainerStarted","Data":"cd3d462defd5da412a88d906c17334d8aa32a0981cbdb0f8ab8920505cb17ccb"} Jan 24 07:43:56 crc kubenswrapper[4705]: 
I0124 07:43:56.193009 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lfr8" event={"ID":"db373164-2a89-4bd8-803b-3a3c4554846c","Type":"ContainerStarted","Data":"e162568b692f4b8871efa8baef87d0d2f18cb980e56e2b9608f268be97549d39"} Jan 24 07:43:56 crc kubenswrapper[4705]: I0124 07:43:56.899329 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:56 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:56 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:56 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:56 crc kubenswrapper[4705]: I0124 07:43:56.899943 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.194473 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 24 07:43:57 crc kubenswrapper[4705]: E0124 07:43:57.203320 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6541401d-7cfb-4aa2-815c-560140a9caf0" containerName="pruner" Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.203348 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6541401d-7cfb-4aa2-815c-560140a9caf0" containerName="pruner" Jan 24 07:43:57 crc kubenswrapper[4705]: E0124 07:43:57.203367 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4379ba-31cd-436c-b1a6-8e715c0d2dca" containerName="collect-profiles" Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.203384 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="da4379ba-31cd-436c-b1a6-8e715c0d2dca" containerName="collect-profiles" Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.203529 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6541401d-7cfb-4aa2-815c-560140a9caf0" containerName="pruner" Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.203548 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="da4379ba-31cd-436c-b1a6-8e715c0d2dca" containerName="collect-profiles" Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.204236 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.211551 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.211804 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.220615 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.302100 4705 generic.go:334] "Generic (PLEG): container finished" podID="db373164-2a89-4bd8-803b-3a3c4554846c" containerID="94567fa029df50a2a1a0d69037ecd285eab9a217eaace19bb12109a7f107b7e5" exitCode=0 Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.302196 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lfr8" event={"ID":"db373164-2a89-4bd8-803b-3a3c4554846c","Type":"ContainerDied","Data":"94567fa029df50a2a1a0d69037ecd285eab9a217eaace19bb12109a7f107b7e5"} Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.324443 4705 generic.go:334] "Generic (PLEG): container finished" podID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" 
containerID="e79e9606b191160959b6120dc23f05dc616ca99b33403d90e9f9f5ea388c8afe" exitCode=0 Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.324532 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64v8j" event={"ID":"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2","Type":"ContainerDied","Data":"e79e9606b191160959b6120dc23f05dc616ca99b33403d90e9f9f5ea388c8afe"} Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.393484 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b890308d-34d4-4bf2-9830-7d9da6fdd136-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b890308d-34d4-4bf2-9830-7d9da6fdd136\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.393592 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b890308d-34d4-4bf2-9830-7d9da6fdd136-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b890308d-34d4-4bf2-9830-7d9da6fdd136\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.552560 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b890308d-34d4-4bf2-9830-7d9da6fdd136-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b890308d-34d4-4bf2-9830-7d9da6fdd136\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.552634 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b890308d-34d4-4bf2-9830-7d9da6fdd136-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b890308d-34d4-4bf2-9830-7d9da6fdd136\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" 
Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.552706 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b890308d-34d4-4bf2-9830-7d9da6fdd136-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b890308d-34d4-4bf2-9830-7d9da6fdd136\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.638978 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b890308d-34d4-4bf2-9830-7d9da6fdd136-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b890308d-34d4-4bf2-9830-7d9da6fdd136\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.857528 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.925442 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:57 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:57 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:57 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:57 crc kubenswrapper[4705]: I0124 07:43:57.925554 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:58 crc kubenswrapper[4705]: I0124 07:43:58.849250 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 24 07:43:58 crc kubenswrapper[4705]: 
I0124 07:43:58.895422 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:58 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:58 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:58 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:58 crc kubenswrapper[4705]: I0124 07:43:58.895495 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:43:59 crc kubenswrapper[4705]: I0124 07:43:59.681679 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b890308d-34d4-4bf2-9830-7d9da6fdd136","Type":"ContainerStarted","Data":"dd7e97dc7b9c9131abdb374e1f3388b1c46794870f0f447d56a8513e40775a48"} Jan 24 07:43:59 crc kubenswrapper[4705]: I0124 07:43:59.896270 4705 patch_prober.go:28] interesting pod/router-default-5444994796-rfsmt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 07:43:59 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Jan 24 07:43:59 crc kubenswrapper[4705]: [+]process-running ok Jan 24 07:43:59 crc kubenswrapper[4705]: healthz check failed Jan 24 07:43:59 crc kubenswrapper[4705]: I0124 07:43:59.896332 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfsmt" podUID="a5317443-5085-4cd9-b3fb-6b8282746932" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 07:44:00 crc kubenswrapper[4705]: I0124 
07:44:00.637906 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b890308d-34d4-4bf2-9830-7d9da6fdd136","Type":"ContainerStarted","Data":"99efd51f93fc4d0ff10a0e6100ee0f879a2aacb7320426c152426f89e0c9c740"} Jan 24 07:44:00 crc kubenswrapper[4705]: I0124 07:44:00.661358 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.661340266 podStartE2EDuration="3.661340266s" podCreationTimestamp="2026-01-24 07:43:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:44:00.658067792 +0000 UTC m=+179.377941080" watchObservedRunningTime="2026-01-24 07:44:00.661340266 +0000 UTC m=+179.381213554" Jan 24 07:44:00 crc kubenswrapper[4705]: I0124 07:44:00.901308 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:44:00 crc kubenswrapper[4705]: I0124 07:44:00.905961 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-rfsmt" Jan 24 07:44:01 crc kubenswrapper[4705]: I0124 07:44:01.416862 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 07:44:03 crc kubenswrapper[4705]: I0124 07:44:03.823502 4705 generic.go:334] "Generic (PLEG): container finished" podID="b890308d-34d4-4bf2-9830-7d9da6fdd136" containerID="99efd51f93fc4d0ff10a0e6100ee0f879a2aacb7320426c152426f89e0c9c740" exitCode=0 Jan 24 07:44:03 crc kubenswrapper[4705]: I0124 07:44:03.824040 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"b890308d-34d4-4bf2-9830-7d9da6fdd136","Type":"ContainerDied","Data":"99efd51f93fc4d0ff10a0e6100ee0f879a2aacb7320426c152426f89e0c9c740"} Jan 24 07:44:04 crc kubenswrapper[4705]: I0124 07:44:04.072672 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 24 07:44:04 crc kubenswrapper[4705]: I0124 07:44:04.072751 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:44:04 crc kubenswrapper[4705]: I0124 07:44:04.072760 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 24 07:44:04 crc kubenswrapper[4705]: I0124 07:44:04.072835 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:44:04 crc kubenswrapper[4705]: I0124 07:44:04.664460 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:44:04 crc kubenswrapper[4705]: I0124 07:44:04.673380 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:44:05 crc kubenswrapper[4705]: I0124 07:44:05.141356 4705 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h4w4g"] Jan 24 07:44:05 crc kubenswrapper[4705]: I0124 07:44:05.153011 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" podUID="774c5a77-4cf4-4f66-8b1e-b44e791ff1e5" containerName="controller-manager" containerID="cri-o://f03490bb6a056b13d32f29c4fd53f96da500a5285f84f272abb1770cb37e8fe0" gracePeriod=30 Jan 24 07:44:05 crc kubenswrapper[4705]: I0124 07:44:05.185813 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm"] Jan 24 07:44:05 crc kubenswrapper[4705]: I0124 07:44:05.188703 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" podUID="c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77" containerName="route-controller-manager" containerID="cri-o://1d03f5180351561f6578c4f3db7a550b571fecdd6370cddde1c870694f1544cc" gracePeriod=30 Jan 24 07:44:06 crc kubenswrapper[4705]: I0124 07:44:06.802886 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 07:44:06 crc kubenswrapper[4705]: I0124 07:44:06.903247 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b890308d-34d4-4bf2-9830-7d9da6fdd136-kubelet-dir\") pod \"b890308d-34d4-4bf2-9830-7d9da6fdd136\" (UID: \"b890308d-34d4-4bf2-9830-7d9da6fdd136\") " Jan 24 07:44:06 crc kubenswrapper[4705]: I0124 07:44:06.903307 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b890308d-34d4-4bf2-9830-7d9da6fdd136-kube-api-access\") pod \"b890308d-34d4-4bf2-9830-7d9da6fdd136\" (UID: \"b890308d-34d4-4bf2-9830-7d9da6fdd136\") " Jan 24 07:44:06 crc kubenswrapper[4705]: I0124 07:44:06.904901 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b890308d-34d4-4bf2-9830-7d9da6fdd136-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b890308d-34d4-4bf2-9830-7d9da6fdd136" (UID: "b890308d-34d4-4bf2-9830-7d9da6fdd136"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:44:06 crc kubenswrapper[4705]: I0124 07:44:06.913847 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b890308d-34d4-4bf2-9830-7d9da6fdd136-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b890308d-34d4-4bf2-9830-7d9da6fdd136" (UID: "b890308d-34d4-4bf2-9830-7d9da6fdd136"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:44:07 crc kubenswrapper[4705]: I0124 07:44:07.110241 4705 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b890308d-34d4-4bf2-9830-7d9da6fdd136-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:07 crc kubenswrapper[4705]: I0124 07:44:07.110289 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b890308d-34d4-4bf2-9830-7d9da6fdd136-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:07 crc kubenswrapper[4705]: I0124 07:44:07.111097 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:44:07 crc kubenswrapper[4705]: I0124 07:44:07.111174 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:44:07 crc kubenswrapper[4705]: I0124 07:44:07.133279 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b890308d-34d4-4bf2-9830-7d9da6fdd136","Type":"ContainerDied","Data":"dd7e97dc7b9c9131abdb374e1f3388b1c46794870f0f447d56a8513e40775a48"} Jan 24 07:44:07 crc kubenswrapper[4705]: I0124 07:44:07.133322 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd7e97dc7b9c9131abdb374e1f3388b1c46794870f0f447d56a8513e40775a48" Jan 24 07:44:07 crc kubenswrapper[4705]: I0124 07:44:07.133368 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 07:44:08 crc kubenswrapper[4705]: I0124 07:44:08.192873 4705 generic.go:334] "Generic (PLEG): container finished" podID="c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77" containerID="1d03f5180351561f6578c4f3db7a550b571fecdd6370cddde1c870694f1544cc" exitCode=0 Jan 24 07:44:08 crc kubenswrapper[4705]: I0124 07:44:08.193145 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" event={"ID":"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77","Type":"ContainerDied","Data":"1d03f5180351561f6578c4f3db7a550b571fecdd6370cddde1c870694f1544cc"} Jan 24 07:44:08 crc kubenswrapper[4705]: I0124 07:44:08.194944 4705 generic.go:334] "Generic (PLEG): container finished" podID="774c5a77-4cf4-4f66-8b1e-b44e791ff1e5" containerID="f03490bb6a056b13d32f29c4fd53f96da500a5285f84f272abb1770cb37e8fe0" exitCode=0 Jan 24 07:44:08 crc kubenswrapper[4705]: I0124 07:44:08.194970 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" event={"ID":"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5","Type":"ContainerDied","Data":"f03490bb6a056b13d32f29c4fd53f96da500a5285f84f272abb1770cb37e8fe0"} Jan 24 07:44:08 crc kubenswrapper[4705]: I0124 07:44:08.866759 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:44:13 crc kubenswrapper[4705]: I0124 07:44:13.427628 4705 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-h4w4g container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 07:44:13 crc kubenswrapper[4705]: I0124 07:44:13.427703 4705 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" podUID="774c5a77-4cf4-4f66-8b1e-b44e791ff1e5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 07:44:13 crc kubenswrapper[4705]: I0124 07:44:13.684078 4705 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-27hpm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 07:44:13 crc kubenswrapper[4705]: I0124 07:44:13.685306 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" podUID="c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 07:44:14 crc kubenswrapper[4705]: I0124 07:44:14.072893 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 24 07:44:14 crc kubenswrapper[4705]: I0124 07:44:14.072684 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 24 07:44:14 crc kubenswrapper[4705]: I0124 07:44:14.072964 4705 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:44:14 crc kubenswrapper[4705]: I0124 07:44:14.073003 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:44:14 crc kubenswrapper[4705]: I0124 07:44:14.073036 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-7czb5" Jan 24 07:44:14 crc kubenswrapper[4705]: I0124 07:44:14.074202 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"b5c5dcf2f51f8939a8be191f1888c016dc9f84186241cf17e4e785cb16a6bf2e"} pod="openshift-console/downloads-7954f5f757-7czb5" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 24 07:44:14 crc kubenswrapper[4705]: I0124 07:44:14.074339 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" containerID="cri-o://b5c5dcf2f51f8939a8be191f1888c016dc9f84186241cf17e4e785cb16a6bf2e" gracePeriod=2 Jan 24 07:44:14 crc kubenswrapper[4705]: I0124 07:44:14.074668 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 24 07:44:14 crc kubenswrapper[4705]: I0124 
07:44:14.074691 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:44:15 crc kubenswrapper[4705]: I0124 07:44:15.813098 4705 generic.go:334] "Generic (PLEG): container finished" podID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerID="b5c5dcf2f51f8939a8be191f1888c016dc9f84186241cf17e4e785cb16a6bf2e" exitCode=0 Jan 24 07:44:15 crc kubenswrapper[4705]: I0124 07:44:15.813186 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7czb5" event={"ID":"c30fd97b-0555-479c-969f-4148e7bfb66d","Type":"ContainerDied","Data":"b5c5dcf2f51f8939a8be191f1888c016dc9f84186241cf17e4e785cb16a6bf2e"} Jan 24 07:44:22 crc kubenswrapper[4705]: I0124 07:44:22.933158 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:44:22 crc kubenswrapper[4705]: I0124 07:44:22.976283 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d"] Jan 24 07:44:22 crc kubenswrapper[4705]: E0124 07:44:22.976944 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77" containerName="route-controller-manager" Jan 24 07:44:22 crc kubenswrapper[4705]: I0124 07:44:22.976962 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77" containerName="route-controller-manager" Jan 24 07:44:22 crc kubenswrapper[4705]: E0124 07:44:22.976978 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b890308d-34d4-4bf2-9830-7d9da6fdd136" containerName="pruner" Jan 24 07:44:22 crc kubenswrapper[4705]: I0124 07:44:22.976985 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b890308d-34d4-4bf2-9830-7d9da6fdd136" containerName="pruner" Jan 24 07:44:22 crc kubenswrapper[4705]: I0124 07:44:22.977093 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77" containerName="route-controller-manager" Jan 24 07:44:22 crc kubenswrapper[4705]: I0124 07:44:22.977103 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b890308d-34d4-4bf2-9830-7d9da6fdd136" containerName="pruner" Jan 24 07:44:22 crc kubenswrapper[4705]: I0124 07:44:22.977563 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:22 crc kubenswrapper[4705]: I0124 07:44:22.992890 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d"] Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.048636 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-config\") pod \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\" (UID: \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\") " Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.048720 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-serving-cert\") pod \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\" (UID: \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\") " Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.048833 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkwws\" (UniqueName: \"kubernetes.io/projected/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-kube-api-access-zkwws\") pod \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\" (UID: \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\") " Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.048884 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-client-ca\") pod \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\" (UID: \"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77\") " Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.049742 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-config" (OuterVolumeSpecName: "config") pod "c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77" (UID: 
"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.049868 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-client-ca" (OuterVolumeSpecName: "client-ca") pod "c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77" (UID: "c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.122477 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77" (UID: "c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.124293 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-kube-api-access-zkwws" (OuterVolumeSpecName: "kube-api-access-zkwws") pod "c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77" (UID: "c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77"). InnerVolumeSpecName "kube-api-access-zkwws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.150889 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2102b546-58f9-4568-9333-9355cbfcc9fd-config\") pod \"route-controller-manager-7764784f6c-pzx8d\" (UID: \"2102b546-58f9-4568-9333-9355cbfcc9fd\") " pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.150946 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2102b546-58f9-4568-9333-9355cbfcc9fd-client-ca\") pod \"route-controller-manager-7764784f6c-pzx8d\" (UID: \"2102b546-58f9-4568-9333-9355cbfcc9fd\") " pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.150989 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-754nf\" (UniqueName: \"kubernetes.io/projected/2102b546-58f9-4568-9333-9355cbfcc9fd-kube-api-access-754nf\") pod \"route-controller-manager-7764784f6c-pzx8d\" (UID: \"2102b546-58f9-4568-9333-9355cbfcc9fd\") " pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.151020 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2102b546-58f9-4568-9333-9355cbfcc9fd-serving-cert\") pod \"route-controller-manager-7764784f6c-pzx8d\" (UID: \"2102b546-58f9-4568-9333-9355cbfcc9fd\") " pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.151061 4705 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-zkwws\" (UniqueName: \"kubernetes.io/projected/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-kube-api-access-zkwws\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.151114 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.151156 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.151170 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.252282 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2102b546-58f9-4568-9333-9355cbfcc9fd-serving-cert\") pod \"route-controller-manager-7764784f6c-pzx8d\" (UID: \"2102b546-58f9-4568-9333-9355cbfcc9fd\") " pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.252374 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2102b546-58f9-4568-9333-9355cbfcc9fd-config\") pod \"route-controller-manager-7764784f6c-pzx8d\" (UID: \"2102b546-58f9-4568-9333-9355cbfcc9fd\") " pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.252405 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2102b546-58f9-4568-9333-9355cbfcc9fd-client-ca\") pod \"route-controller-manager-7764784f6c-pzx8d\" (UID: \"2102b546-58f9-4568-9333-9355cbfcc9fd\") " pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.252457 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-754nf\" (UniqueName: \"kubernetes.io/projected/2102b546-58f9-4568-9333-9355cbfcc9fd-kube-api-access-754nf\") pod \"route-controller-manager-7764784f6c-pzx8d\" (UID: \"2102b546-58f9-4568-9333-9355cbfcc9fd\") " pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.253518 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2102b546-58f9-4568-9333-9355cbfcc9fd-client-ca\") pod \"route-controller-manager-7764784f6c-pzx8d\" (UID: \"2102b546-58f9-4568-9333-9355cbfcc9fd\") " pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.254914 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2102b546-58f9-4568-9333-9355cbfcc9fd-config\") pod \"route-controller-manager-7764784f6c-pzx8d\" (UID: \"2102b546-58f9-4568-9333-9355cbfcc9fd\") " pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.255687 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2102b546-58f9-4568-9333-9355cbfcc9fd-serving-cert\") pod \"route-controller-manager-7764784f6c-pzx8d\" (UID: \"2102b546-58f9-4568-9333-9355cbfcc9fd\") " pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:23 crc 
kubenswrapper[4705]: I0124 07:44:23.277549 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-754nf\" (UniqueName: \"kubernetes.io/projected/2102b546-58f9-4568-9333-9355cbfcc9fd-kube-api-access-754nf\") pod \"route-controller-manager-7764784f6c-pzx8d\" (UID: \"2102b546-58f9-4568-9333-9355cbfcc9fd\") " pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.423857 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.427222 4705 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-h4w4g container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.427260 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" podUID="774c5a77-4cf4-4f66-8b1e-b44e791ff1e5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.634317 4705 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-27hpm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.634751 
4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" podUID="c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.868181 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" event={"ID":"c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77","Type":"ContainerDied","Data":"df8be331ddf2318f6b38440521d63cf012f10b0c87d30b1db792eb9c3470216c"} Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.868249 4705 scope.go:117] "RemoveContainer" containerID="1d03f5180351561f6578c4f3db7a550b571fecdd6370cddde1c870694f1544cc" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.868247 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm" Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.905478 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm"] Jan 24 07:44:23 crc kubenswrapper[4705]: I0124 07:44:23.909102 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-27hpm"] Jan 24 07:44:24 crc kubenswrapper[4705]: I0124 07:44:24.082389 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 24 07:44:24 crc kubenswrapper[4705]: I0124 07:44:24.082482 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:44:24 crc kubenswrapper[4705]: I0124 07:44:24.529327 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xc9ml" Jan 24 07:44:25 crc kubenswrapper[4705]: I0124 07:44:25.347154 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d"] Jan 24 07:44:25 crc kubenswrapper[4705]: I0124 07:44:25.659462 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77" path="/var/lib/kubelet/pods/c7d26bc2-a5bd-4680-8bd9-d2ac0bc6ec77/volumes" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.447971 4705 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.480605 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66454fdb65-ccv75"] Jan 24 07:44:30 crc kubenswrapper[4705]: E0124 07:44:30.480917 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="774c5a77-4cf4-4f66-8b1e-b44e791ff1e5" containerName="controller-manager" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.480932 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="774c5a77-4cf4-4f66-8b1e-b44e791ff1e5" containerName="controller-manager" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.481084 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="774c5a77-4cf4-4f66-8b1e-b44e791ff1e5" containerName="controller-manager" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.481546 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.491655 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66454fdb65-ccv75"] Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.551698 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-config\") pod \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.551742 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-proxy-ca-bundles\") pod \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.551777 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-serving-cert\") pod \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.551901 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x68nd\" (UniqueName: \"kubernetes.io/projected/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-kube-api-access-x68nd\") pod \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\" (UID: \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.551969 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-client-ca\") pod \"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\" (UID: 
\"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5\") " Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.552660 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "774c5a77-4cf4-4f66-8b1e-b44e791ff1e5" (UID: "774c5a77-4cf4-4f66-8b1e-b44e791ff1e5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.552691 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-client-ca" (OuterVolumeSpecName: "client-ca") pod "774c5a77-4cf4-4f66-8b1e-b44e791ff1e5" (UID: "774c5a77-4cf4-4f66-8b1e-b44e791ff1e5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.552724 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-config" (OuterVolumeSpecName: "config") pod "774c5a77-4cf4-4f66-8b1e-b44e791ff1e5" (UID: "774c5a77-4cf4-4f66-8b1e-b44e791ff1e5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.654208 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-client-ca\") pod \"controller-manager-66454fdb65-ccv75\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.654748 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-config\") pod \"controller-manager-66454fdb65-ccv75\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.654866 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-proxy-ca-bundles\") pod \"controller-manager-66454fdb65-ccv75\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.654904 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1afdc725-8d46-415e-8f17-766ca00acc1e-serving-cert\") pod \"controller-manager-66454fdb65-ccv75\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.655028 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz6dl\" (UniqueName: 
\"kubernetes.io/projected/1afdc725-8d46-415e-8f17-766ca00acc1e-kube-api-access-xz6dl\") pod \"controller-manager-66454fdb65-ccv75\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.655173 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.655211 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.655230 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.720332 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-kube-api-access-x68nd" (OuterVolumeSpecName: "kube-api-access-x68nd") pod "774c5a77-4cf4-4f66-8b1e-b44e791ff1e5" (UID: "774c5a77-4cf4-4f66-8b1e-b44e791ff1e5"). InnerVolumeSpecName "kube-api-access-x68nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.720480 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "774c5a77-4cf4-4f66-8b1e-b44e791ff1e5" (UID: "774c5a77-4cf4-4f66-8b1e-b44e791ff1e5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.755910 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-config\") pod \"controller-manager-66454fdb65-ccv75\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.756002 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-proxy-ca-bundles\") pod \"controller-manager-66454fdb65-ccv75\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.756081 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1afdc725-8d46-415e-8f17-766ca00acc1e-serving-cert\") pod \"controller-manager-66454fdb65-ccv75\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.756194 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz6dl\" (UniqueName: \"kubernetes.io/projected/1afdc725-8d46-415e-8f17-766ca00acc1e-kube-api-access-xz6dl\") pod \"controller-manager-66454fdb65-ccv75\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.756247 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-client-ca\") pod 
\"controller-manager-66454fdb65-ccv75\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.756328 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.756353 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x68nd\" (UniqueName: \"kubernetes.io/projected/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5-kube-api-access-x68nd\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.757610 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-client-ca\") pod \"controller-manager-66454fdb65-ccv75\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.758254 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-proxy-ca-bundles\") pod \"controller-manager-66454fdb65-ccv75\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.758248 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-config\") pod \"controller-manager-66454fdb65-ccv75\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.763164 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1afdc725-8d46-415e-8f17-766ca00acc1e-serving-cert\") pod \"controller-manager-66454fdb65-ccv75\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:30 crc kubenswrapper[4705]: I0124 07:44:30.824396 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz6dl\" (UniqueName: \"kubernetes.io/projected/1afdc725-8d46-415e-8f17-766ca00acc1e-kube-api-access-xz6dl\") pod \"controller-manager-66454fdb65-ccv75\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:31 crc kubenswrapper[4705]: I0124 07:44:31.118075 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:31 crc kubenswrapper[4705]: I0124 07:44:31.148092 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" event={"ID":"774c5a77-4cf4-4f66-8b1e-b44e791ff1e5","Type":"ContainerDied","Data":"890bbdfadadec3399cb7bb5649c353a165fa59fb34bb75d3f32e8adb51b63a3f"} Jan 24 07:44:31 crc kubenswrapper[4705]: I0124 07:44:31.148153 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-h4w4g" Jan 24 07:44:31 crc kubenswrapper[4705]: I0124 07:44:31.272599 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h4w4g"] Jan 24 07:44:31 crc kubenswrapper[4705]: I0124 07:44:31.276114 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h4w4g"] Jan 24 07:44:31 crc kubenswrapper[4705]: I0124 07:44:31.585904 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="774c5a77-4cf4-4f66-8b1e-b44e791ff1e5" path="/var/lib/kubelet/pods/774c5a77-4cf4-4f66-8b1e-b44e791ff1e5/volumes" Jan 24 07:44:33 crc kubenswrapper[4705]: I0124 07:44:33.646365 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 24 07:44:33 crc kubenswrapper[4705]: I0124 07:44:33.647339 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 07:44:33 crc kubenswrapper[4705]: I0124 07:44:33.649094 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 24 07:44:33 crc kubenswrapper[4705]: I0124 07:44:33.649251 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 24 07:44:33 crc kubenswrapper[4705]: I0124 07:44:33.659094 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 24 07:44:33 crc kubenswrapper[4705]: I0124 07:44:33.735217 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 07:44:33 crc kubenswrapper[4705]: I0124 07:44:33.735321 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 07:44:33 crc kubenswrapper[4705]: I0124 07:44:33.838947 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 07:44:33 crc kubenswrapper[4705]: I0124 07:44:33.839356 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 07:44:33 crc kubenswrapper[4705]: I0124 07:44:33.839468 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 07:44:33 crc kubenswrapper[4705]: I0124 07:44:33.883961 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 07:44:33 crc kubenswrapper[4705]: I0124 07:44:33.970152 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 07:44:34 crc kubenswrapper[4705]: I0124 07:44:34.072548 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 24 07:44:34 crc kubenswrapper[4705]: I0124 07:44:34.072629 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:44:35 crc kubenswrapper[4705]: E0124 07:44:35.444803 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 24 07:44:35 crc kubenswrapper[4705]: E0124 07:44:35.445229 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vbr5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9qgbp_openshift-marketplace(e1f81499-3c8f-40b6-bd99-344558565c77): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 07:44:35 crc kubenswrapper[4705]: E0124 07:44:35.446373 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9qgbp" podUID="e1f81499-3c8f-40b6-bd99-344558565c77" Jan 24 07:44:37 crc 
kubenswrapper[4705]: I0124 07:44:37.070802 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:44:37 crc kubenswrapper[4705]: I0124 07:44:37.071338 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:44:37 crc kubenswrapper[4705]: I0124 07:44:37.071378 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 07:44:37 crc kubenswrapper[4705]: I0124 07:44:37.071894 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"431cb8075f18f140b73d9e089cff43755e810217f0785819ba908696396884a8"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:44:37 crc kubenswrapper[4705]: I0124 07:44:37.071939 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://431cb8075f18f140b73d9e089cff43755e810217f0785819ba908696396884a8" gracePeriod=600 Jan 24 07:44:37 crc kubenswrapper[4705]: I0124 07:44:37.321588 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" 
containerID="431cb8075f18f140b73d9e089cff43755e810217f0785819ba908696396884a8" exitCode=0 Jan 24 07:44:37 crc kubenswrapper[4705]: I0124 07:44:37.321638 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"431cb8075f18f140b73d9e089cff43755e810217f0785819ba908696396884a8"} Jan 24 07:44:39 crc kubenswrapper[4705]: I0124 07:44:39.043883 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 24 07:44:39 crc kubenswrapper[4705]: I0124 07:44:39.045298 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 07:44:39 crc kubenswrapper[4705]: I0124 07:44:39.057320 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 24 07:44:39 crc kubenswrapper[4705]: I0124 07:44:39.118351 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/63fd1206-c28b-4e05-94ab-8935afb05436-kubelet-dir\") pod \"installer-9-crc\" (UID: \"63fd1206-c28b-4e05-94ab-8935afb05436\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 07:44:39 crc kubenswrapper[4705]: I0124 07:44:39.118411 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63fd1206-c28b-4e05-94ab-8935afb05436-kube-api-access\") pod \"installer-9-crc\" (UID: \"63fd1206-c28b-4e05-94ab-8935afb05436\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 07:44:39 crc kubenswrapper[4705]: I0124 07:44:39.118440 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/63fd1206-c28b-4e05-94ab-8935afb05436-var-lock\") pod 
\"installer-9-crc\" (UID: \"63fd1206-c28b-4e05-94ab-8935afb05436\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 07:44:39 crc kubenswrapper[4705]: I0124 07:44:39.219951 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/63fd1206-c28b-4e05-94ab-8935afb05436-kubelet-dir\") pod \"installer-9-crc\" (UID: \"63fd1206-c28b-4e05-94ab-8935afb05436\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 07:44:39 crc kubenswrapper[4705]: I0124 07:44:39.220095 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63fd1206-c28b-4e05-94ab-8935afb05436-kube-api-access\") pod \"installer-9-crc\" (UID: \"63fd1206-c28b-4e05-94ab-8935afb05436\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 07:44:39 crc kubenswrapper[4705]: I0124 07:44:39.220035 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/63fd1206-c28b-4e05-94ab-8935afb05436-kubelet-dir\") pod \"installer-9-crc\" (UID: \"63fd1206-c28b-4e05-94ab-8935afb05436\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 07:44:39 crc kubenswrapper[4705]: I0124 07:44:39.220178 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/63fd1206-c28b-4e05-94ab-8935afb05436-var-lock\") pod \"installer-9-crc\" (UID: \"63fd1206-c28b-4e05-94ab-8935afb05436\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 07:44:39 crc kubenswrapper[4705]: I0124 07:44:39.220397 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/63fd1206-c28b-4e05-94ab-8935afb05436-var-lock\") pod \"installer-9-crc\" (UID: \"63fd1206-c28b-4e05-94ab-8935afb05436\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 07:44:39 crc kubenswrapper[4705]: 
I0124 07:44:39.239954 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63fd1206-c28b-4e05-94ab-8935afb05436-kube-api-access\") pod \"installer-9-crc\" (UID: \"63fd1206-c28b-4e05-94ab-8935afb05436\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 07:44:39 crc kubenswrapper[4705]: I0124 07:44:39.367618 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 07:44:40 crc kubenswrapper[4705]: E0124 07:44:40.790672 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9qgbp" podUID="e1f81499-3c8f-40b6-bd99-344558565c77" Jan 24 07:44:40 crc kubenswrapper[4705]: E0124 07:44:40.867579 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 24 07:44:40 crc kubenswrapper[4705]: E0124 07:44:40.867745 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfv5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7lfr8_openshift-marketplace(db373164-2a89-4bd8-803b-3a3c4554846c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 07:44:40 crc kubenswrapper[4705]: E0124 07:44:40.869129 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-7lfr8" podUID="db373164-2a89-4bd8-803b-3a3c4554846c" Jan 24 07:44:40 crc 
kubenswrapper[4705]: E0124 07:44:40.900112 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 24 07:44:40 crc kubenswrapper[4705]: E0124 07:44:40.900427 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mf96d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-64v8j_openshift-marketplace(9eace400-39bb-4f2a-ab2f-379a8fd3e8c2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 07:44:40 crc kubenswrapper[4705]: E0124 07:44:40.902010 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-64v8j" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" Jan 24 07:44:42 crc kubenswrapper[4705]: E0124 07:44:42.096867 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-64v8j" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" Jan 24 07:44:42 crc kubenswrapper[4705]: E0124 07:44:42.096957 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7lfr8" podUID="db373164-2a89-4bd8-803b-3a3c4554846c" Jan 24 07:44:42 crc kubenswrapper[4705]: E0124 07:44:42.172797 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 24 07:44:42 crc kubenswrapper[4705]: E0124 07:44:42.173034 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hpldv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-pdt8p_openshift-marketplace(a6b1d233-4df3-4960-abd3-c8bf11ca322b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 07:44:42 crc kubenswrapper[4705]: E0124 07:44:42.174216 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code 
= Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-pdt8p" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" Jan 24 07:44:43 crc kubenswrapper[4705]: E0124 07:44:43.529112 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-pdt8p" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" Jan 24 07:44:43 crc kubenswrapper[4705]: I0124 07:44:43.540295 4705 scope.go:117] "RemoveContainer" containerID="f03490bb6a056b13d32f29c4fd53f96da500a5285f84f272abb1770cb37e8fe0" Jan 24 07:44:43 crc kubenswrapper[4705]: E0124 07:44:43.624691 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 24 07:44:43 crc kubenswrapper[4705]: E0124 07:44:43.624908 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vhhjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4k26f_openshift-marketplace(690b269f-3c5d-47b5-a11b-6c44dd6b1f95): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 07:44:43 crc kubenswrapper[4705]: E0124 07:44:43.626200 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4k26f" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" Jan 24 07:44:43 crc 
kubenswrapper[4705]: E0124 07:44:43.644268 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 24 07:44:43 crc kubenswrapper[4705]: E0124 07:44:43.645563 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8m45d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-gz2gs_openshift-marketplace(7a408baf-8e2f-438d-b77f-2abd317fe09f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 07:44:43 crc kubenswrapper[4705]: E0124 07:44:43.649162 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gz2gs" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" Jan 24 07:44:43 crc kubenswrapper[4705]: E0124 07:44:43.682966 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 24 07:44:43 crc kubenswrapper[4705]: E0124 07:44:43.683231 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqm7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qmflx_openshift-marketplace(74ac561c-5afe-4308-814f-11bf3f93f4ac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 07:44:43 crc kubenswrapper[4705]: E0124 07:44:43.684869 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qmflx" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" Jan 24 07:44:43 crc 
kubenswrapper[4705]: E0124 07:44:43.732361 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 24 07:44:43 crc kubenswrapper[4705]: E0124 07:44:43.733066 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56g7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-9ffcg_openshift-marketplace(34532038-b143-4391-99f3-37275497f03e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 07:44:43 crc kubenswrapper[4705]: E0124 07:44:43.738230 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-9ffcg" podUID="34532038-b143-4391-99f3-37275497f03e" Jan 24 07:44:44 crc kubenswrapper[4705]: I0124 07:44:44.119469 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 24 07:44:44 crc kubenswrapper[4705]: I0124 07:44:44.119537 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:44:44 crc kubenswrapper[4705]: I0124 07:44:44.364205 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 24 07:44:44 crc kubenswrapper[4705]: I0124 07:44:44.381371 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 24 07:44:44 crc kubenswrapper[4705]: I0124 07:44:44.391205 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"55a789d696c5fbcd2185ba344cba09dd1eceead04a4521477c8a234b346679c0"} Jan 24 07:44:44 crc kubenswrapper[4705]: I0124 07:44:44.396807 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 24 07:44:44 crc kubenswrapper[4705]: I0124 07:44:44.396866 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:44:44 crc kubenswrapper[4705]: I0124 07:44:44.397551 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7czb5" event={"ID":"c30fd97b-0555-479c-969f-4148e7bfb66d","Type":"ContainerStarted","Data":"fd1e9c31ad6eccafabe506ead7f1444899963b909a8bc6a76734e9a21f209179"} Jan 24 07:44:44 crc kubenswrapper[4705]: I0124 07:44:44.397588 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-7czb5" Jan 24 07:44:44 crc kubenswrapper[4705]: E0124 07:44:44.398445 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qmflx" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" Jan 24 07:44:44 crc kubenswrapper[4705]: E0124 07:44:44.400601 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gz2gs" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" Jan 24 07:44:44 crc kubenswrapper[4705]: E0124 07:44:44.400667 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-9ffcg" podUID="34532038-b143-4391-99f3-37275497f03e" Jan 24 07:44:44 crc kubenswrapper[4705]: E0124 07:44:44.400717 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4k26f" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" Jan 24 07:44:44 crc kubenswrapper[4705]: I0124 07:44:44.551790 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d"] Jan 24 07:44:44 crc kubenswrapper[4705]: I0124 07:44:44.582436 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66454fdb65-ccv75"] Jan 24 07:44:44 crc kubenswrapper[4705]: W0124 07:44:44.612069 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2102b546_58f9_4568_9333_9355cbfcc9fd.slice/crio-95fd8b5acfa4106a7bacda5343ecdba38241559b8bb07c07ed3880b495d1931d WatchSource:0}: Error finding container 95fd8b5acfa4106a7bacda5343ecdba38241559b8bb07c07ed3880b495d1931d: Status 404 returned error can't find the container with id 95fd8b5acfa4106a7bacda5343ecdba38241559b8bb07c07ed3880b495d1931d Jan 24 07:44:45 crc kubenswrapper[4705]: I0124 07:44:45.403733 4705 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde","Type":"ContainerStarted","Data":"7730fa8db6dfed388838a43917acf7654ca7c820f7ae17eb8dad66cb547a232b"} Jan 24 07:44:45 crc kubenswrapper[4705]: I0124 07:44:45.405129 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" event={"ID":"2102b546-58f9-4568-9333-9355cbfcc9fd","Type":"ContainerStarted","Data":"95fd8b5acfa4106a7bacda5343ecdba38241559b8bb07c07ed3880b495d1931d"} Jan 24 07:44:45 crc kubenswrapper[4705]: I0124 07:44:45.406654 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"63fd1206-c28b-4e05-94ab-8935afb05436","Type":"ContainerStarted","Data":"6390e4bf0f93fe58036cb5a43d9a7c31fc5255e52758a699bd1644d16773c191"} Jan 24 07:44:45 crc kubenswrapper[4705]: I0124 07:44:45.407950 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" event={"ID":"1afdc725-8d46-415e-8f17-766ca00acc1e","Type":"ContainerStarted","Data":"575588d7e8d75ed70363c70060bb8a76855cb981a0deff35fc998b97601b6b1f"} Jan 24 07:44:45 crc kubenswrapper[4705]: I0124 07:44:45.408531 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 24 07:44:45 crc kubenswrapper[4705]: I0124 07:44:45.408609 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:44:46 crc kubenswrapper[4705]: I0124 07:44:46.417041 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" event={"ID":"1afdc725-8d46-415e-8f17-766ca00acc1e","Type":"ContainerStarted","Data":"ce0624c471077c06b430905e2382e753ce4c5c56ac72f7a4f5b9292de6ecd333"} Jan 24 07:44:46 crc kubenswrapper[4705]: I0124 07:44:46.418510 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:44:46 crc kubenswrapper[4705]: I0124 07:44:46.420354 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde","Type":"ContainerStarted","Data":"83b4cead4cb1bb907f2e8dcda6456041c3c939c80d9b90aa5de857b9f120c0fc"} Jan 24 07:44:46 crc kubenswrapper[4705]: I0124 07:44:46.421477 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" event={"ID":"2102b546-58f9-4568-9333-9355cbfcc9fd","Type":"ContainerStarted","Data":"68f2fbb127d8f296ca5ee5a8ed87116b37b1a161133b4e476efebdb4e447c0e4"} Jan 24 07:44:46 crc kubenswrapper[4705]: I0124 07:44:46.421581 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" podUID="2102b546-58f9-4568-9333-9355cbfcc9fd" containerName="route-controller-manager" containerID="cri-o://68f2fbb127d8f296ca5ee5a8ed87116b37b1a161133b4e476efebdb4e447c0e4" gracePeriod=30 Jan 24 07:44:46 crc kubenswrapper[4705]: I0124 07:44:46.421691 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:46 crc kubenswrapper[4705]: I0124 07:44:46.423942 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 
07:44:46 crc kubenswrapper[4705]: I0124 07:44:46.427097 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"63fd1206-c28b-4e05-94ab-8935afb05436","Type":"ContainerStarted","Data":"8dbb2d3447860ee9ff58595bbcb0089799a5c4a33f3561129f56ed05f47877cc"} Jan 24 07:44:46 crc kubenswrapper[4705]: I0124 07:44:46.427366 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:46 crc kubenswrapper[4705]: I0124 07:44:46.433373 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" podStartSLOduration=21.433359399 podStartE2EDuration="21.433359399s" podCreationTimestamp="2026-01-24 07:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:44:46.433137993 +0000 UTC m=+225.153011301" watchObservedRunningTime="2026-01-24 07:44:46.433359399 +0000 UTC m=+225.153232687" Jan 24 07:44:46 crc kubenswrapper[4705]: I0124 07:44:46.494934 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" podStartSLOduration=41.494914025 podStartE2EDuration="41.494914025s" podCreationTimestamp="2026-01-24 07:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:44:46.47905801 +0000 UTC m=+225.198931298" watchObservedRunningTime="2026-01-24 07:44:46.494914025 +0000 UTC m=+225.214787313" Jan 24 07:44:46 crc kubenswrapper[4705]: I0124 07:44:46.505603 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=13.505586188 podStartE2EDuration="13.505586188s" 
podCreationTimestamp="2026-01-24 07:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:44:46.504540057 +0000 UTC m=+225.224413345" watchObservedRunningTime="2026-01-24 07:44:46.505586188 +0000 UTC m=+225.225459476" Jan 24 07:44:47 crc kubenswrapper[4705]: I0124 07:44:47.436201 4705 generic.go:334] "Generic (PLEG): container finished" podID="2102b546-58f9-4568-9333-9355cbfcc9fd" containerID="68f2fbb127d8f296ca5ee5a8ed87116b37b1a161133b4e476efebdb4e447c0e4" exitCode=0 Jan 24 07:44:47 crc kubenswrapper[4705]: I0124 07:44:47.436299 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" event={"ID":"2102b546-58f9-4568-9333-9355cbfcc9fd","Type":"ContainerDied","Data":"68f2fbb127d8f296ca5ee5a8ed87116b37b1a161133b4e476efebdb4e447c0e4"} Jan 24 07:44:47 crc kubenswrapper[4705]: I0124 07:44:47.438659 4705 generic.go:334] "Generic (PLEG): container finished" podID="7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde" containerID="83b4cead4cb1bb907f2e8dcda6456041c3c939c80d9b90aa5de857b9f120c0fc" exitCode=0 Jan 24 07:44:47 crc kubenswrapper[4705]: I0124 07:44:47.438711 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde","Type":"ContainerDied","Data":"83b4cead4cb1bb907f2e8dcda6456041c3c939c80d9b90aa5de857b9f120c0fc"} Jan 24 07:44:47 crc kubenswrapper[4705]: I0124 07:44:47.456590 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=8.456574084 podStartE2EDuration="8.456574084s" podCreationTimestamp="2026-01-24 07:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:44:46.52337787 +0000 UTC m=+225.243251158" 
watchObservedRunningTime="2026-01-24 07:44:47.456574084 +0000 UTC m=+226.176447372" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.582781 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.628533 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw"] Jan 24 07:44:48 crc kubenswrapper[4705]: E0124 07:44:48.628862 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2102b546-58f9-4568-9333-9355cbfcc9fd" containerName="route-controller-manager" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.628876 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2102b546-58f9-4568-9333-9355cbfcc9fd" containerName="route-controller-manager" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.629001 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2102b546-58f9-4568-9333-9355cbfcc9fd" containerName="route-controller-manager" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.630633 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.661047 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw"] Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.684377 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2102b546-58f9-4568-9333-9355cbfcc9fd-client-ca\") pod \"2102b546-58f9-4568-9333-9355cbfcc9fd\" (UID: \"2102b546-58f9-4568-9333-9355cbfcc9fd\") " Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.684458 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2102b546-58f9-4568-9333-9355cbfcc9fd-config\") pod \"2102b546-58f9-4568-9333-9355cbfcc9fd\" (UID: \"2102b546-58f9-4568-9333-9355cbfcc9fd\") " Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.684708 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2102b546-58f9-4568-9333-9355cbfcc9fd-serving-cert\") pod \"2102b546-58f9-4568-9333-9355cbfcc9fd\" (UID: \"2102b546-58f9-4568-9333-9355cbfcc9fd\") " Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.684738 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-754nf\" (UniqueName: \"kubernetes.io/projected/2102b546-58f9-4568-9333-9355cbfcc9fd-kube-api-access-754nf\") pod \"2102b546-58f9-4568-9333-9355cbfcc9fd\" (UID: \"2102b546-58f9-4568-9333-9355cbfcc9fd\") " Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.686534 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2102b546-58f9-4568-9333-9355cbfcc9fd-config" (OuterVolumeSpecName: "config") pod "2102b546-58f9-4568-9333-9355cbfcc9fd" (UID: 
"2102b546-58f9-4568-9333-9355cbfcc9fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.687205 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2102b546-58f9-4568-9333-9355cbfcc9fd-client-ca" (OuterVolumeSpecName: "client-ca") pod "2102b546-58f9-4568-9333-9355cbfcc9fd" (UID: "2102b546-58f9-4568-9333-9355cbfcc9fd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.693887 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2102b546-58f9-4568-9333-9355cbfcc9fd-kube-api-access-754nf" (OuterVolumeSpecName: "kube-api-access-754nf") pod "2102b546-58f9-4568-9333-9355cbfcc9fd" (UID: "2102b546-58f9-4568-9333-9355cbfcc9fd"). InnerVolumeSpecName "kube-api-access-754nf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.694969 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2102b546-58f9-4568-9333-9355cbfcc9fd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2102b546-58f9-4568-9333-9355cbfcc9fd" (UID: "2102b546-58f9-4568-9333-9355cbfcc9fd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.777754 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.786131 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c9ed591-a137-4703-947a-4a759ff1a1eb-serving-cert\") pod \"route-controller-manager-5885b8ccbc-xrnbw\" (UID: \"7c9ed591-a137-4703-947a-4a759ff1a1eb\") " pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.786246 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9ed591-a137-4703-947a-4a759ff1a1eb-config\") pod \"route-controller-manager-5885b8ccbc-xrnbw\" (UID: \"7c9ed591-a137-4703-947a-4a759ff1a1eb\") " pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.786311 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp728\" (UniqueName: \"kubernetes.io/projected/7c9ed591-a137-4703-947a-4a759ff1a1eb-kube-api-access-tp728\") pod \"route-controller-manager-5885b8ccbc-xrnbw\" (UID: \"7c9ed591-a137-4703-947a-4a759ff1a1eb\") " pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.786343 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c9ed591-a137-4703-947a-4a759ff1a1eb-client-ca\") pod \"route-controller-manager-5885b8ccbc-xrnbw\" (UID: \"7c9ed591-a137-4703-947a-4a759ff1a1eb\") " pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.786405 4705 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2102b546-58f9-4568-9333-9355cbfcc9fd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.786419 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-754nf\" (UniqueName: \"kubernetes.io/projected/2102b546-58f9-4568-9333-9355cbfcc9fd-kube-api-access-754nf\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.786432 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2102b546-58f9-4568-9333-9355cbfcc9fd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.786445 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2102b546-58f9-4568-9333-9355cbfcc9fd-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.887940 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde-kube-api-access\") pod \"7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde\" (UID: \"7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde\") " Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.887991 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde-kubelet-dir\") pod \"7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde\" (UID: \"7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde\") " Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.888100 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde" (UID: "7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.888116 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c9ed591-a137-4703-947a-4a759ff1a1eb-serving-cert\") pod \"route-controller-manager-5885b8ccbc-xrnbw\" (UID: \"7c9ed591-a137-4703-947a-4a759ff1a1eb\") " pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.888260 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9ed591-a137-4703-947a-4a759ff1a1eb-config\") pod \"route-controller-manager-5885b8ccbc-xrnbw\" (UID: \"7c9ed591-a137-4703-947a-4a759ff1a1eb\") " pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.888406 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp728\" (UniqueName: \"kubernetes.io/projected/7c9ed591-a137-4703-947a-4a759ff1a1eb-kube-api-access-tp728\") pod \"route-controller-manager-5885b8ccbc-xrnbw\" (UID: \"7c9ed591-a137-4703-947a-4a759ff1a1eb\") " pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.888461 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c9ed591-a137-4703-947a-4a759ff1a1eb-client-ca\") pod \"route-controller-manager-5885b8ccbc-xrnbw\" (UID: \"7c9ed591-a137-4703-947a-4a759ff1a1eb\") " pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.888594 4705 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.889421 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c9ed591-a137-4703-947a-4a759ff1a1eb-client-ca\") pod \"route-controller-manager-5885b8ccbc-xrnbw\" (UID: \"7c9ed591-a137-4703-947a-4a759ff1a1eb\") " pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.889573 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9ed591-a137-4703-947a-4a759ff1a1eb-config\") pod \"route-controller-manager-5885b8ccbc-xrnbw\" (UID: \"7c9ed591-a137-4703-947a-4a759ff1a1eb\") " pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.893975 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde" (UID: "7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.894481 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c9ed591-a137-4703-947a-4a759ff1a1eb-serving-cert\") pod \"route-controller-manager-5885b8ccbc-xrnbw\" (UID: \"7c9ed591-a137-4703-947a-4a759ff1a1eb\") " pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.906530 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp728\" (UniqueName: \"kubernetes.io/projected/7c9ed591-a137-4703-947a-4a759ff1a1eb-kube-api-access-tp728\") pod \"route-controller-manager-5885b8ccbc-xrnbw\" (UID: \"7c9ed591-a137-4703-947a-4a759ff1a1eb\") " pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.971640 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:44:48 crc kubenswrapper[4705]: I0124 07:44:48.989999 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:49 crc kubenswrapper[4705]: I0124 07:44:49.346722 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw"] Jan 24 07:44:49 crc kubenswrapper[4705]: W0124 07:44:49.353469 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c9ed591_a137_4703_947a_4a759ff1a1eb.slice/crio-36f8a005415993215fa2ad2646a71ce51cc8fa81706e26b5762b7a56962ab311 WatchSource:0}: Error finding container 36f8a005415993215fa2ad2646a71ce51cc8fa81706e26b5762b7a56962ab311: Status 404 returned error can't find the container with id 36f8a005415993215fa2ad2646a71ce51cc8fa81706e26b5762b7a56962ab311 Jan 24 07:44:49 crc kubenswrapper[4705]: I0124 07:44:49.470300 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" event={"ID":"7c9ed591-a137-4703-947a-4a759ff1a1eb","Type":"ContainerStarted","Data":"36f8a005415993215fa2ad2646a71ce51cc8fa81706e26b5762b7a56962ab311"} Jan 24 07:44:49 crc kubenswrapper[4705]: I0124 07:44:49.471915 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 07:44:49 crc kubenswrapper[4705]: I0124 07:44:49.471923 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde","Type":"ContainerDied","Data":"7730fa8db6dfed388838a43917acf7654ca7c820f7ae17eb8dad66cb547a232b"} Jan 24 07:44:49 crc kubenswrapper[4705]: I0124 07:44:49.471946 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7730fa8db6dfed388838a43917acf7654ca7c820f7ae17eb8dad66cb547a232b" Jan 24 07:44:49 crc kubenswrapper[4705]: I0124 07:44:49.473907 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" event={"ID":"2102b546-58f9-4568-9333-9355cbfcc9fd","Type":"ContainerDied","Data":"95fd8b5acfa4106a7bacda5343ecdba38241559b8bb07c07ed3880b495d1931d"} Jan 24 07:44:49 crc kubenswrapper[4705]: I0124 07:44:49.473946 4705 scope.go:117] "RemoveContainer" containerID="68f2fbb127d8f296ca5ee5a8ed87116b37b1a161133b4e476efebdb4e447c0e4" Jan 24 07:44:49 crc kubenswrapper[4705]: I0124 07:44:49.473989 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d" Jan 24 07:44:49 crc kubenswrapper[4705]: I0124 07:44:49.506740 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d"] Jan 24 07:44:49 crc kubenswrapper[4705]: I0124 07:44:49.516488 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7764784f6c-pzx8d"] Jan 24 07:44:49 crc kubenswrapper[4705]: I0124 07:44:49.583709 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2102b546-58f9-4568-9333-9355cbfcc9fd" path="/var/lib/kubelet/pods/2102b546-58f9-4568-9333-9355cbfcc9fd/volumes" Jan 24 07:44:52 crc kubenswrapper[4705]: I0124 07:44:52.494928 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" event={"ID":"7c9ed591-a137-4703-947a-4a759ff1a1eb","Type":"ContainerStarted","Data":"55bb8ff7a647e7aeda4c6e52d75722aed986a4a422296313029107c9deb64da9"} Jan 24 07:44:52 crc kubenswrapper[4705]: I0124 07:44:52.495256 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:44:52 crc kubenswrapper[4705]: I0124 07:44:52.513390 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" podStartSLOduration=27.513372339 podStartE2EDuration="27.513372339s" podCreationTimestamp="2026-01-24 07:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:44:52.510794773 +0000 UTC m=+231.230668051" watchObservedRunningTime="2026-01-24 07:44:52.513372339 +0000 UTC m=+231.233245627" Jan 24 07:44:52 crc kubenswrapper[4705]: I0124 
07:44:52.778847 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:44:54 crc kubenswrapper[4705]: I0124 07:44:54.072521 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 24 07:44:54 crc kubenswrapper[4705]: I0124 07:44:54.073122 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:44:54 crc kubenswrapper[4705]: I0124 07:44:54.072520 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-7czb5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 24 07:44:54 crc kubenswrapper[4705]: I0124 07:44:54.073194 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7czb5" podUID="c30fd97b-0555-479c-969f-4148e7bfb66d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 24 07:44:56 crc kubenswrapper[4705]: I0124 07:44:56.135753 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qgbp" event={"ID":"e1f81499-3c8f-40b6-bd99-344558565c77","Type":"ContainerStarted","Data":"f78bb201e5fe689d30326a139eb8ebf3e8d1e3f28fdc67752f9aba84fcdb8713"} Jan 24 07:44:57 crc kubenswrapper[4705]: I0124 07:44:57.146699 4705 generic.go:334] "Generic (PLEG): 
container finished" podID="e1f81499-3c8f-40b6-bd99-344558565c77" containerID="f78bb201e5fe689d30326a139eb8ebf3e8d1e3f28fdc67752f9aba84fcdb8713" exitCode=0 Jan 24 07:44:57 crc kubenswrapper[4705]: I0124 07:44:57.147276 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qgbp" event={"ID":"e1f81499-3c8f-40b6-bd99-344558565c77","Type":"ContainerDied","Data":"f78bb201e5fe689d30326a139eb8ebf3e8d1e3f28fdc67752f9aba84fcdb8713"} Jan 24 07:44:58 crc kubenswrapper[4705]: I0124 07:44:58.155071 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lfr8" event={"ID":"db373164-2a89-4bd8-803b-3a3c4554846c","Type":"ContainerStarted","Data":"968ea6723ed3f03471b89f9705eb82f1c8df84d2dccf75f86babb2d63657ddc8"} Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.310132 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m"] Jan 24 07:45:00 crc kubenswrapper[4705]: E0124 07:45:00.311383 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde" containerName="pruner" Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.311414 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde" containerName="pruner" Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.312550 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dbf5bf1-c619-4ad4-b394-58f4f4bd3dde" containerName="pruner" Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.313864 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m" Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.317986 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.319610 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.329888 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m"] Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.504884 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2dc53e87-c43b-49ce-adf1-030634af0ad2-secret-volume\") pod \"collect-profiles-29487345-8229m\" (UID: \"2dc53e87-c43b-49ce-adf1-030634af0ad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m" Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.504966 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn7kq\" (UniqueName: \"kubernetes.io/projected/2dc53e87-c43b-49ce-adf1-030634af0ad2-kube-api-access-zn7kq\") pod \"collect-profiles-29487345-8229m\" (UID: \"2dc53e87-c43b-49ce-adf1-030634af0ad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m" Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.504997 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dc53e87-c43b-49ce-adf1-030634af0ad2-config-volume\") pod \"collect-profiles-29487345-8229m\" (UID: \"2dc53e87-c43b-49ce-adf1-030634af0ad2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m" Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.607588 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2dc53e87-c43b-49ce-adf1-030634af0ad2-secret-volume\") pod \"collect-profiles-29487345-8229m\" (UID: \"2dc53e87-c43b-49ce-adf1-030634af0ad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m" Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.607688 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn7kq\" (UniqueName: \"kubernetes.io/projected/2dc53e87-c43b-49ce-adf1-030634af0ad2-kube-api-access-zn7kq\") pod \"collect-profiles-29487345-8229m\" (UID: \"2dc53e87-c43b-49ce-adf1-030634af0ad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m" Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.607720 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dc53e87-c43b-49ce-adf1-030634af0ad2-config-volume\") pod \"collect-profiles-29487345-8229m\" (UID: \"2dc53e87-c43b-49ce-adf1-030634af0ad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m" Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.608657 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dc53e87-c43b-49ce-adf1-030634af0ad2-config-volume\") pod \"collect-profiles-29487345-8229m\" (UID: \"2dc53e87-c43b-49ce-adf1-030634af0ad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m" Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.616428 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2dc53e87-c43b-49ce-adf1-030634af0ad2-secret-volume\") pod \"collect-profiles-29487345-8229m\" (UID: \"2dc53e87-c43b-49ce-adf1-030634af0ad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m" Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.663027 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn7kq\" (UniqueName: \"kubernetes.io/projected/2dc53e87-c43b-49ce-adf1-030634af0ad2-kube-api-access-zn7kq\") pod \"collect-profiles-29487345-8229m\" (UID: \"2dc53e87-c43b-49ce-adf1-030634af0ad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m" Jan 24 07:45:00 crc kubenswrapper[4705]: I0124 07:45:00.953511 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m" Jan 24 07:45:01 crc kubenswrapper[4705]: I0124 07:45:01.736095 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m"] Jan 24 07:45:02 crc kubenswrapper[4705]: I0124 07:45:02.334396 4705 generic.go:334] "Generic (PLEG): container finished" podID="db373164-2a89-4bd8-803b-3a3c4554846c" containerID="968ea6723ed3f03471b89f9705eb82f1c8df84d2dccf75f86babb2d63657ddc8" exitCode=0 Jan 24 07:45:02 crc kubenswrapper[4705]: I0124 07:45:02.334458 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lfr8" event={"ID":"db373164-2a89-4bd8-803b-3a3c4554846c","Type":"ContainerDied","Data":"968ea6723ed3f03471b89f9705eb82f1c8df84d2dccf75f86babb2d63657ddc8"} Jan 24 07:45:02 crc kubenswrapper[4705]: I0124 07:45:02.336911 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m" event={"ID":"2dc53e87-c43b-49ce-adf1-030634af0ad2","Type":"ContainerStarted","Data":"510c881c1e09b95d50204e02371cfd62c40bd7fb5d97896f57cf6468908bc025"} 
Jan 24 07:45:03 crc kubenswrapper[4705]: I0124 07:45:03.345357 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qgbp" event={"ID":"e1f81499-3c8f-40b6-bd99-344558565c77","Type":"ContainerStarted","Data":"dfaa7523274614e9b6ae7139d225a56acc53ca076b355aac7985664fe9cc9459"} Jan 24 07:45:03 crc kubenswrapper[4705]: I0124 07:45:03.347584 4705 generic.go:334] "Generic (PLEG): container finished" podID="2dc53e87-c43b-49ce-adf1-030634af0ad2" containerID="72d39d9f81fa8dd014c45af7794cc09ffdaddd67b76bf54199559b68edebe2f9" exitCode=0 Jan 24 07:45:03 crc kubenswrapper[4705]: I0124 07:45:03.347632 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m" event={"ID":"2dc53e87-c43b-49ce-adf1-030634af0ad2","Type":"ContainerDied","Data":"72d39d9f81fa8dd014c45af7794cc09ffdaddd67b76bf54199559b68edebe2f9"} Jan 24 07:45:03 crc kubenswrapper[4705]: I0124 07:45:03.382730 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9qgbp" podStartSLOduration=5.318713247 podStartE2EDuration="1m14.382709738s" podCreationTimestamp="2026-01-24 07:43:49 +0000 UTC" firstStartedPulling="2026-01-24 07:43:51.738912152 +0000 UTC m=+170.458785440" lastFinishedPulling="2026-01-24 07:45:00.802908643 +0000 UTC m=+239.522781931" observedRunningTime="2026-01-24 07:45:03.366281046 +0000 UTC m=+242.086154334" watchObservedRunningTime="2026-01-24 07:45:03.382709738 +0000 UTC m=+242.102583016" Jan 24 07:45:04 crc kubenswrapper[4705]: I0124 07:45:04.088610 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-7czb5" Jan 24 07:45:04 crc kubenswrapper[4705]: I0124 07:45:04.977034 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m" Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.186242 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dc53e87-c43b-49ce-adf1-030634af0ad2-config-volume\") pod \"2dc53e87-c43b-49ce-adf1-030634af0ad2\" (UID: \"2dc53e87-c43b-49ce-adf1-030634af0ad2\") " Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.186330 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn7kq\" (UniqueName: \"kubernetes.io/projected/2dc53e87-c43b-49ce-adf1-030634af0ad2-kube-api-access-zn7kq\") pod \"2dc53e87-c43b-49ce-adf1-030634af0ad2\" (UID: \"2dc53e87-c43b-49ce-adf1-030634af0ad2\") " Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.186437 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2dc53e87-c43b-49ce-adf1-030634af0ad2-secret-volume\") pod \"2dc53e87-c43b-49ce-adf1-030634af0ad2\" (UID: \"2dc53e87-c43b-49ce-adf1-030634af0ad2\") " Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.188479 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dc53e87-c43b-49ce-adf1-030634af0ad2-config-volume" (OuterVolumeSpecName: "config-volume") pod "2dc53e87-c43b-49ce-adf1-030634af0ad2" (UID: "2dc53e87-c43b-49ce-adf1-030634af0ad2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.193193 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dc53e87-c43b-49ce-adf1-030634af0ad2-kube-api-access-zn7kq" (OuterVolumeSpecName: "kube-api-access-zn7kq") pod "2dc53e87-c43b-49ce-adf1-030634af0ad2" (UID: "2dc53e87-c43b-49ce-adf1-030634af0ad2"). 
InnerVolumeSpecName "kube-api-access-zn7kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.203142 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dc53e87-c43b-49ce-adf1-030634af0ad2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2dc53e87-c43b-49ce-adf1-030634af0ad2" (UID: "2dc53e87-c43b-49ce-adf1-030634af0ad2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.264838 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66454fdb65-ccv75"] Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.265088 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" podUID="1afdc725-8d46-415e-8f17-766ca00acc1e" containerName="controller-manager" containerID="cri-o://ce0624c471077c06b430905e2382e753ce4c5c56ac72f7a4f5b9292de6ecd333" gracePeriod=30 Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.287120 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dc53e87-c43b-49ce-adf1-030634af0ad2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.287155 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn7kq\" (UniqueName: \"kubernetes.io/projected/2dc53e87-c43b-49ce-adf1-030634af0ad2-kube-api-access-zn7kq\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.287167 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2dc53e87-c43b-49ce-adf1-030634af0ad2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.380875 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lfr8" event={"ID":"db373164-2a89-4bd8-803b-3a3c4554846c","Type":"ContainerStarted","Data":"1088c2d5a2b5091864a4dacee01e2982ae25d1b9f43d306abbbfafb7c4daf98d"} Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.383909 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k26f" event={"ID":"690b269f-3c5d-47b5-a11b-6c44dd6b1f95","Type":"ContainerStarted","Data":"53565f93ddf7b9091a305d0076a53d5c7664b6ddac1b344e16a8bf70ae4b0067"} Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.387329 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m" Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.387476 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m" event={"ID":"2dc53e87-c43b-49ce-adf1-030634af0ad2","Type":"ContainerDied","Data":"510c881c1e09b95d50204e02371cfd62c40bd7fb5d97896f57cf6468908bc025"} Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.387504 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="510c881c1e09b95d50204e02371cfd62c40bd7fb5d97896f57cf6468908bc025" Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.392794 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gz2gs" event={"ID":"7a408baf-8e2f-438d-b77f-2abd317fe09f","Type":"ContainerStarted","Data":"8342a78d571d810594b9915ee0121d763ed997f754b4c9f35192c753144333f2"} Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.397997 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ffcg" 
event={"ID":"34532038-b143-4391-99f3-37275497f03e","Type":"ContainerStarted","Data":"22b65463a8b3ea5d3cd5c8b09f1e8a69299d33fc391a25c9985b80354106f4e7"} Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.400320 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pdt8p" event={"ID":"a6b1d233-4df3-4960-abd3-c8bf11ca322b","Type":"ContainerStarted","Data":"ccf2f28c8aa48fc157bef752661babfd42c2e633f56eb1b84869539aedcae6bd"} Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.403134 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmflx" event={"ID":"74ac561c-5afe-4308-814f-11bf3f93f4ac","Type":"ContainerStarted","Data":"be1366f49dc6002add07bc8c11746f580d80b16c29f088cfe99727fdbef7cf7d"} Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.405573 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64v8j" event={"ID":"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2","Type":"ContainerStarted","Data":"9cde2e9b124ad99ad229b7688c3a6a715a611d28e69d543e397d33517e463366"} Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.485098 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7lfr8" podStartSLOduration=6.175747354 podStartE2EDuration="1m13.485061579s" podCreationTimestamp="2026-01-24 07:43:52 +0000 UTC" firstStartedPulling="2026-01-24 07:43:57.30975559 +0000 UTC m=+176.029628878" lastFinishedPulling="2026-01-24 07:45:04.619069815 +0000 UTC m=+243.338943103" observedRunningTime="2026-01-24 07:45:05.480604908 +0000 UTC m=+244.200478196" watchObservedRunningTime="2026-01-24 07:45:05.485061579 +0000 UTC m=+244.204934867" Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 07:45:05.694462 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw"] Jan 24 07:45:05 crc kubenswrapper[4705]: I0124 
07:45:05.694942 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" podUID="7c9ed591-a137-4703-947a-4a759ff1a1eb" containerName="route-controller-manager" containerID="cri-o://55bb8ff7a647e7aeda4c6e52d75722aed986a4a422296313029107c9deb64da9" gracePeriod=30 Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.440849 4705 generic.go:334] "Generic (PLEG): container finished" podID="7c9ed591-a137-4703-947a-4a759ff1a1eb" containerID="55bb8ff7a647e7aeda4c6e52d75722aed986a4a422296313029107c9deb64da9" exitCode=0 Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.440981 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" event={"ID":"7c9ed591-a137-4703-947a-4a759ff1a1eb","Type":"ContainerDied","Data":"55bb8ff7a647e7aeda4c6e52d75722aed986a4a422296313029107c9deb64da9"} Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.444405 4705 generic.go:334] "Generic (PLEG): container finished" podID="1afdc725-8d46-415e-8f17-766ca00acc1e" containerID="ce0624c471077c06b430905e2382e753ce4c5c56ac72f7a4f5b9292de6ecd333" exitCode=0 Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.444463 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" event={"ID":"1afdc725-8d46-415e-8f17-766ca00acc1e","Type":"ContainerDied","Data":"ce0624c471077c06b430905e2382e753ce4c5c56ac72f7a4f5b9292de6ecd333"} Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.447426 4705 generic.go:334] "Generic (PLEG): container finished" podID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" containerID="ccf2f28c8aa48fc157bef752661babfd42c2e633f56eb1b84869539aedcae6bd" exitCode=0 Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.447515 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pdt8p" 
event={"ID":"a6b1d233-4df3-4960-abd3-c8bf11ca322b","Type":"ContainerDied","Data":"ccf2f28c8aa48fc157bef752661babfd42c2e633f56eb1b84869539aedcae6bd"} Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.455932 4705 generic.go:334] "Generic (PLEG): container finished" podID="74ac561c-5afe-4308-814f-11bf3f93f4ac" containerID="be1366f49dc6002add07bc8c11746f580d80b16c29f088cfe99727fdbef7cf7d" exitCode=0 Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.455978 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmflx" event={"ID":"74ac561c-5afe-4308-814f-11bf3f93f4ac","Type":"ContainerDied","Data":"be1366f49dc6002add07bc8c11746f580d80b16c29f088cfe99727fdbef7cf7d"} Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.794637 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.828805 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k"] Jan 24 07:45:06 crc kubenswrapper[4705]: E0124 07:45:06.830990 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c9ed591-a137-4703-947a-4a759ff1a1eb" containerName="route-controller-manager" Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.831154 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c9ed591-a137-4703-947a-4a759ff1a1eb" containerName="route-controller-manager" Jan 24 07:45:06 crc kubenswrapper[4705]: E0124 07:45:06.831227 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc53e87-c43b-49ce-adf1-030634af0ad2" containerName="collect-profiles" Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.831290 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc53e87-c43b-49ce-adf1-030634af0ad2" containerName="collect-profiles" Jan 24 07:45:06 crc 
kubenswrapper[4705]: I0124 07:45:06.831468 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c9ed591-a137-4703-947a-4a759ff1a1eb" containerName="route-controller-manager" Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.831532 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc53e87-c43b-49ce-adf1-030634af0ad2" containerName="collect-profiles" Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.832027 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.833120 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k"] Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.893076 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9ed591-a137-4703-947a-4a759ff1a1eb-config\") pod \"7c9ed591-a137-4703-947a-4a759ff1a1eb\" (UID: \"7c9ed591-a137-4703-947a-4a759ff1a1eb\") " Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.893178 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c9ed591-a137-4703-947a-4a759ff1a1eb-client-ca\") pod \"7c9ed591-a137-4703-947a-4a759ff1a1eb\" (UID: \"7c9ed591-a137-4703-947a-4a759ff1a1eb\") " Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.893210 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c9ed591-a137-4703-947a-4a759ff1a1eb-serving-cert\") pod \"7c9ed591-a137-4703-947a-4a759ff1a1eb\" (UID: \"7c9ed591-a137-4703-947a-4a759ff1a1eb\") " Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.893247 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-tp728\" (UniqueName: \"kubernetes.io/projected/7c9ed591-a137-4703-947a-4a759ff1a1eb-kube-api-access-tp728\") pod \"7c9ed591-a137-4703-947a-4a759ff1a1eb\" (UID: \"7c9ed591-a137-4703-947a-4a759ff1a1eb\") " Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.893542 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-client-ca\") pod \"route-controller-manager-744bc7f9db-8nz6k\" (UID: \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\") " pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.893580 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-config\") pod \"route-controller-manager-744bc7f9db-8nz6k\" (UID: \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\") " pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.893637 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-serving-cert\") pod \"route-controller-manager-744bc7f9db-8nz6k\" (UID: \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\") " pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.893676 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqc5x\" (UniqueName: \"kubernetes.io/projected/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-kube-api-access-kqc5x\") pod \"route-controller-manager-744bc7f9db-8nz6k\" (UID: \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\") " 
pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.894720 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c9ed591-a137-4703-947a-4a759ff1a1eb-client-ca" (OuterVolumeSpecName: "client-ca") pod "7c9ed591-a137-4703-947a-4a759ff1a1eb" (UID: "7c9ed591-a137-4703-947a-4a759ff1a1eb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.894880 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c9ed591-a137-4703-947a-4a759ff1a1eb-config" (OuterVolumeSpecName: "config") pod "7c9ed591-a137-4703-947a-4a759ff1a1eb" (UID: "7c9ed591-a137-4703-947a-4a759ff1a1eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.900572 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c9ed591-a137-4703-947a-4a759ff1a1eb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7c9ed591-a137-4703-947a-4a759ff1a1eb" (UID: "7c9ed591-a137-4703-947a-4a759ff1a1eb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:45:06 crc kubenswrapper[4705]: I0124 07:45:06.901046 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c9ed591-a137-4703-947a-4a759ff1a1eb-kube-api-access-tp728" (OuterVolumeSpecName: "kube-api-access-tp728") pod "7c9ed591-a137-4703-947a-4a759ff1a1eb" (UID: "7c9ed591-a137-4703-947a-4a759ff1a1eb"). InnerVolumeSpecName "kube-api-access-tp728". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.070981 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqc5x\" (UniqueName: \"kubernetes.io/projected/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-kube-api-access-kqc5x\") pod \"route-controller-manager-744bc7f9db-8nz6k\" (UID: \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\") " pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.071070 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-client-ca\") pod \"route-controller-manager-744bc7f9db-8nz6k\" (UID: \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\") " pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.071102 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-config\") pod \"route-controller-manager-744bc7f9db-8nz6k\" (UID: \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\") " pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.071143 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-serving-cert\") pod \"route-controller-manager-744bc7f9db-8nz6k\" (UID: \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\") " pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.071365 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7c9ed591-a137-4703-947a-4a759ff1a1eb-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.072195 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c9ed591-a137-4703-947a-4a759ff1a1eb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.072235 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c9ed591-a137-4703-947a-4a759ff1a1eb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.072254 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp728\" (UniqueName: \"kubernetes.io/projected/7c9ed591-a137-4703-947a-4a759ff1a1eb-kube-api-access-tp728\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.072406 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-config\") pod \"route-controller-manager-744bc7f9db-8nz6k\" (UID: \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\") " pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.072707 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-client-ca\") pod \"route-controller-manager-744bc7f9db-8nz6k\" (UID: \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\") " pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.074733 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-serving-cert\") pod 
\"route-controller-manager-744bc7f9db-8nz6k\" (UID: \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\") " pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.089544 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqc5x\" (UniqueName: \"kubernetes.io/projected/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-kube-api-access-kqc5x\") pod \"route-controller-manager-744bc7f9db-8nz6k\" (UID: \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\") " pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.150458 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.161320 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.274570 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-config\") pod \"1afdc725-8d46-415e-8f17-766ca00acc1e\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.274707 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-client-ca\") pod \"1afdc725-8d46-415e-8f17-766ca00acc1e\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.274749 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz6dl\" (UniqueName: 
\"kubernetes.io/projected/1afdc725-8d46-415e-8f17-766ca00acc1e-kube-api-access-xz6dl\") pod \"1afdc725-8d46-415e-8f17-766ca00acc1e\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.274794 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-proxy-ca-bundles\") pod \"1afdc725-8d46-415e-8f17-766ca00acc1e\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.274850 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1afdc725-8d46-415e-8f17-766ca00acc1e-serving-cert\") pod \"1afdc725-8d46-415e-8f17-766ca00acc1e\" (UID: \"1afdc725-8d46-415e-8f17-766ca00acc1e\") " Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.276867 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1afdc725-8d46-415e-8f17-766ca00acc1e" (UID: "1afdc725-8d46-415e-8f17-766ca00acc1e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.276942 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-config" (OuterVolumeSpecName: "config") pod "1afdc725-8d46-415e-8f17-766ca00acc1e" (UID: "1afdc725-8d46-415e-8f17-766ca00acc1e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.276158 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-client-ca" (OuterVolumeSpecName: "client-ca") pod "1afdc725-8d46-415e-8f17-766ca00acc1e" (UID: "1afdc725-8d46-415e-8f17-766ca00acc1e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.299085 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1afdc725-8d46-415e-8f17-766ca00acc1e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1afdc725-8d46-415e-8f17-766ca00acc1e" (UID: "1afdc725-8d46-415e-8f17-766ca00acc1e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.369097 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1afdc725-8d46-415e-8f17-766ca00acc1e-kube-api-access-xz6dl" (OuterVolumeSpecName: "kube-api-access-xz6dl") pod "1afdc725-8d46-415e-8f17-766ca00acc1e" (UID: "1afdc725-8d46-415e-8f17-766ca00acc1e"). InnerVolumeSpecName "kube-api-access-xz6dl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.376707 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.376761 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz6dl\" (UniqueName: \"kubernetes.io/projected/1afdc725-8d46-415e-8f17-766ca00acc1e-kube-api-access-xz6dl\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.376783 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.376804 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1afdc725-8d46-415e-8f17-766ca00acc1e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.376838 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afdc725-8d46-415e-8f17-766ca00acc1e-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.467894 4705 generic.go:334] "Generic (PLEG): container finished" podID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" containerID="53565f93ddf7b9091a305d0076a53d5c7664b6ddac1b344e16a8bf70ae4b0067" exitCode=0 Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.468035 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k26f" event={"ID":"690b269f-3c5d-47b5-a11b-6c44dd6b1f95","Type":"ContainerDied","Data":"53565f93ddf7b9091a305d0076a53d5c7664b6ddac1b344e16a8bf70ae4b0067"} Jan 24 07:45:07 crc kubenswrapper[4705]: 
I0124 07:45:07.479344 4705 generic.go:334] "Generic (PLEG): container finished" podID="7a408baf-8e2f-438d-b77f-2abd317fe09f" containerID="8342a78d571d810594b9915ee0121d763ed997f754b4c9f35192c753144333f2" exitCode=0 Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.479432 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gz2gs" event={"ID":"7a408baf-8e2f-438d-b77f-2abd317fe09f","Type":"ContainerDied","Data":"8342a78d571d810594b9915ee0121d763ed997f754b4c9f35192c753144333f2"} Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.487232 4705 generic.go:334] "Generic (PLEG): container finished" podID="34532038-b143-4391-99f3-37275497f03e" containerID="22b65463a8b3ea5d3cd5c8b09f1e8a69299d33fc391a25c9985b80354106f4e7" exitCode=0 Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.487358 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ffcg" event={"ID":"34532038-b143-4391-99f3-37275497f03e","Type":"ContainerDied","Data":"22b65463a8b3ea5d3cd5c8b09f1e8a69299d33fc391a25c9985b80354106f4e7"} Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.501433 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" event={"ID":"7c9ed591-a137-4703-947a-4a759ff1a1eb","Type":"ContainerDied","Data":"36f8a005415993215fa2ad2646a71ce51cc8fa81706e26b5762b7a56962ab311"} Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.501522 4705 scope.go:117] "RemoveContainer" containerID="55bb8ff7a647e7aeda4c6e52d75722aed986a4a422296313029107c9deb64da9" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.501733 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.514374 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" event={"ID":"1afdc725-8d46-415e-8f17-766ca00acc1e","Type":"ContainerDied","Data":"575588d7e8d75ed70363c70060bb8a76855cb981a0deff35fc998b97601b6b1f"} Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.514558 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66454fdb65-ccv75" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.570050 4705 scope.go:117] "RemoveContainer" containerID="ce0624c471077c06b430905e2382e753ce4c5c56ac72f7a4f5b9292de6ecd333" Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.632000 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66454fdb65-ccv75"] Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.643082 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-66454fdb65-ccv75"] Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.658404 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw"] Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.664035 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5885b8ccbc-xrnbw"] Jan 24 07:45:07 crc kubenswrapper[4705]: I0124 07:45:07.766408 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k"] Jan 24 07:45:07 crc kubenswrapper[4705]: W0124 07:45:07.781622 4705 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfab3a70b_8e34_4cd8_8b34_c8c17ea9d429.slice/crio-3b23f6147d33ec22b865ebb04d14183d42872b03cc8e87a7d8cc59bea5c0fdd8 WatchSource:0}: Error finding container 3b23f6147d33ec22b865ebb04d14183d42872b03cc8e87a7d8cc59bea5c0fdd8: Status 404 returned error can't find the container with id 3b23f6147d33ec22b865ebb04d14183d42872b03cc8e87a7d8cc59bea5c0fdd8 Jan 24 07:45:08 crc kubenswrapper[4705]: I0124 07:45:08.522899 4705 generic.go:334] "Generic (PLEG): container finished" podID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" containerID="9cde2e9b124ad99ad229b7688c3a6a715a611d28e69d543e397d33517e463366" exitCode=0 Jan 24 07:45:08 crc kubenswrapper[4705]: I0124 07:45:08.522967 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64v8j" event={"ID":"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2","Type":"ContainerDied","Data":"9cde2e9b124ad99ad229b7688c3a6a715a611d28e69d543e397d33517e463366"} Jan 24 07:45:08 crc kubenswrapper[4705]: I0124 07:45:08.524733 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" event={"ID":"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429","Type":"ContainerStarted","Data":"3b23f6147d33ec22b865ebb04d14183d42872b03cc8e87a7d8cc59bea5c0fdd8"} Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.531793 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" event={"ID":"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429","Type":"ContainerStarted","Data":"5aa474d8375f6553144fd88b4baa56a55f5152b93e2a91183f25b77398c9b905"} Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.532606 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.536898 4705 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.552298 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" podStartSLOduration=4.552279224 podStartE2EDuration="4.552279224s" podCreationTimestamp="2026-01-24 07:45:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:45:09.550634506 +0000 UTC m=+248.270507804" watchObservedRunningTime="2026-01-24 07:45:09.552279224 +0000 UTC m=+248.272152512" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.581804 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1afdc725-8d46-415e-8f17-766ca00acc1e" path="/var/lib/kubelet/pods/1afdc725-8d46-415e-8f17-766ca00acc1e/volumes" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.582611 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c9ed591-a137-4703-947a-4a759ff1a1eb" path="/var/lib/kubelet/pods/7c9ed591-a137-4703-947a-4a759ff1a1eb/volumes" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.811996 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b76f947cf-swbj9"] Jan 24 07:45:09 crc kubenswrapper[4705]: E0124 07:45:09.812235 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1afdc725-8d46-415e-8f17-766ca00acc1e" containerName="controller-manager" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.812247 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1afdc725-8d46-415e-8f17-766ca00acc1e" containerName="controller-manager" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.812381 4705 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1afdc725-8d46-415e-8f17-766ca00acc1e" containerName="controller-manager" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.812774 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.814430 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.814971 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.817353 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.817673 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.817787 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.818132 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.825691 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.834147 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b76f947cf-swbj9"] Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.871785 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9qgbp" Jan 24 07:45:09 crc 
kubenswrapper[4705]: I0124 07:45:09.871851 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9qgbp" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.919380 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-config\") pod \"controller-manager-5b76f947cf-swbj9\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.919432 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvlnz\" (UniqueName: \"kubernetes.io/projected/a4148672-7a7b-43c5-a453-9ea6904bab91-kube-api-access-fvlnz\") pod \"controller-manager-5b76f947cf-swbj9\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.919533 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4148672-7a7b-43c5-a453-9ea6904bab91-serving-cert\") pod \"controller-manager-5b76f947cf-swbj9\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.919603 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-client-ca\") pod \"controller-manager-5b76f947cf-swbj9\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:09 crc kubenswrapper[4705]: I0124 07:45:09.919753 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-proxy-ca-bundles\") pod \"controller-manager-5b76f947cf-swbj9\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:10 crc kubenswrapper[4705]: I0124 07:45:10.021425 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-config\") pod \"controller-manager-5b76f947cf-swbj9\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:10 crc kubenswrapper[4705]: I0124 07:45:10.021487 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvlnz\" (UniqueName: \"kubernetes.io/projected/a4148672-7a7b-43c5-a453-9ea6904bab91-kube-api-access-fvlnz\") pod \"controller-manager-5b76f947cf-swbj9\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:10 crc kubenswrapper[4705]: I0124 07:45:10.021534 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4148672-7a7b-43c5-a453-9ea6904bab91-serving-cert\") pod \"controller-manager-5b76f947cf-swbj9\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:10 crc kubenswrapper[4705]: I0124 07:45:10.021566 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-client-ca\") pod \"controller-manager-5b76f947cf-swbj9\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " 
pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:10 crc kubenswrapper[4705]: I0124 07:45:10.021604 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-proxy-ca-bundles\") pod \"controller-manager-5b76f947cf-swbj9\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:10 crc kubenswrapper[4705]: I0124 07:45:10.023452 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-proxy-ca-bundles\") pod \"controller-manager-5b76f947cf-swbj9\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:10 crc kubenswrapper[4705]: I0124 07:45:10.023724 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-client-ca\") pod \"controller-manager-5b76f947cf-swbj9\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:10 crc kubenswrapper[4705]: I0124 07:45:10.023946 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-config\") pod \"controller-manager-5b76f947cf-swbj9\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:10 crc kubenswrapper[4705]: I0124 07:45:10.027743 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4148672-7a7b-43c5-a453-9ea6904bab91-serving-cert\") pod 
\"controller-manager-5b76f947cf-swbj9\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:10 crc kubenswrapper[4705]: I0124 07:45:10.038135 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvlnz\" (UniqueName: \"kubernetes.io/projected/a4148672-7a7b-43c5-a453-9ea6904bab91-kube-api-access-fvlnz\") pod \"controller-manager-5b76f947cf-swbj9\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:10 crc kubenswrapper[4705]: I0124 07:45:10.130647 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:10 crc kubenswrapper[4705]: I0124 07:45:10.958774 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9qgbp" Jan 24 07:45:11 crc kubenswrapper[4705]: I0124 07:45:11.002715 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9qgbp" Jan 24 07:45:13 crc kubenswrapper[4705]: I0124 07:45:13.325507 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:45:13 crc kubenswrapper[4705]: I0124 07:45:13.325878 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:45:13 crc kubenswrapper[4705]: I0124 07:45:13.402843 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:45:13 crc kubenswrapper[4705]: I0124 07:45:13.583682 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:45:13 crc kubenswrapper[4705]: I0124 07:45:13.979415 4705 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7lfr8"] Jan 24 07:45:15 crc kubenswrapper[4705]: I0124 07:45:15.561683 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7lfr8" podUID="db373164-2a89-4bd8-803b-3a3c4554846c" containerName="registry-server" containerID="cri-o://1088c2d5a2b5091864a4dacee01e2982ae25d1b9f43d306abbbfafb7c4daf98d" gracePeriod=2 Jan 24 07:45:16 crc kubenswrapper[4705]: I0124 07:45:16.905830 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b76f947cf-swbj9"] Jan 24 07:45:16 crc kubenswrapper[4705]: W0124 07:45:16.910863 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4148672_7a7b_43c5_a453_9ea6904bab91.slice/crio-fe648f97db6ad869d3d1d0cf56a3c08937bca90546654ca21668f60a57406bbe WatchSource:0}: Error finding container fe648f97db6ad869d3d1d0cf56a3c08937bca90546654ca21668f60a57406bbe: Status 404 returned error can't find the container with id fe648f97db6ad869d3d1d0cf56a3c08937bca90546654ca21668f60a57406bbe Jan 24 07:45:17 crc kubenswrapper[4705]: I0124 07:45:17.579206 4705 generic.go:334] "Generic (PLEG): container finished" podID="db373164-2a89-4bd8-803b-3a3c4554846c" containerID="1088c2d5a2b5091864a4dacee01e2982ae25d1b9f43d306abbbfafb7c4daf98d" exitCode=0 Jan 24 07:45:17 crc kubenswrapper[4705]: I0124 07:45:17.585389 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" event={"ID":"a4148672-7a7b-43c5-a453-9ea6904bab91","Type":"ContainerStarted","Data":"fe648f97db6ad869d3d1d0cf56a3c08937bca90546654ca21668f60a57406bbe"} Jan 24 07:45:17 crc kubenswrapper[4705]: I0124 07:45:17.585461 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lfr8" 
event={"ID":"db373164-2a89-4bd8-803b-3a3c4554846c","Type":"ContainerDied","Data":"1088c2d5a2b5091864a4dacee01e2982ae25d1b9f43d306abbbfafb7c4daf98d"} Jan 24 07:45:19 crc kubenswrapper[4705]: I0124 07:45:19.592664 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pdt8p" event={"ID":"a6b1d233-4df3-4960-abd3-c8bf11ca322b","Type":"ContainerStarted","Data":"59a7b44cc571d89ebc9d625342a7e5eae37f2b14cc56e7a1f711e17fd711ae57"} Jan 24 07:45:20 crc kubenswrapper[4705]: I0124 07:45:20.598171 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" event={"ID":"a4148672-7a7b-43c5-a453-9ea6904bab91","Type":"ContainerStarted","Data":"1b7e21c637ee8b9d112fb0e09e5c8b6596d25bbcfb98392de4abf977ee18e2d0"} Jan 24 07:45:20 crc kubenswrapper[4705]: I0124 07:45:20.598497 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:20 crc kubenswrapper[4705]: I0124 07:45:20.603252 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:45:20 crc kubenswrapper[4705]: I0124 07:45:20.615258 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pdt8p" podStartSLOduration=9.280608042 podStartE2EDuration="1m29.615239793s" podCreationTimestamp="2026-01-24 07:43:51 +0000 UTC" firstStartedPulling="2026-01-24 07:43:56.17807227 +0000 UTC m=+174.897945558" lastFinishedPulling="2026-01-24 07:45:16.512704021 +0000 UTC m=+255.232577309" observedRunningTime="2026-01-24 07:45:20.613865052 +0000 UTC m=+259.333738340" watchObservedRunningTime="2026-01-24 07:45:20.615239793 +0000 UTC m=+259.335113091" Jan 24 07:45:20 crc kubenswrapper[4705]: I0124 07:45:20.631251 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" podStartSLOduration=15.631232002 podStartE2EDuration="15.631232002s" podCreationTimestamp="2026-01-24 07:45:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:45:20.629660076 +0000 UTC m=+259.349533374" watchObservedRunningTime="2026-01-24 07:45:20.631232002 +0000 UTC m=+259.351105290" Jan 24 07:45:20 crc kubenswrapper[4705]: I0124 07:45:20.838331 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:45:20 crc kubenswrapper[4705]: I0124 07:45:20.923394 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db373164-2a89-4bd8-803b-3a3c4554846c-utilities\") pod \"db373164-2a89-4bd8-803b-3a3c4554846c\" (UID: \"db373164-2a89-4bd8-803b-3a3c4554846c\") " Jan 24 07:45:20 crc kubenswrapper[4705]: I0124 07:45:20.923509 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db373164-2a89-4bd8-803b-3a3c4554846c-catalog-content\") pod \"db373164-2a89-4bd8-803b-3a3c4554846c\" (UID: \"db373164-2a89-4bd8-803b-3a3c4554846c\") " Jan 24 07:45:20 crc kubenswrapper[4705]: I0124 07:45:20.923550 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfv5s\" (UniqueName: \"kubernetes.io/projected/db373164-2a89-4bd8-803b-3a3c4554846c-kube-api-access-zfv5s\") pod \"db373164-2a89-4bd8-803b-3a3c4554846c\" (UID: \"db373164-2a89-4bd8-803b-3a3c4554846c\") " Jan 24 07:45:20 crc kubenswrapper[4705]: I0124 07:45:20.924633 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db373164-2a89-4bd8-803b-3a3c4554846c-utilities" (OuterVolumeSpecName: "utilities") pod 
"db373164-2a89-4bd8-803b-3a3c4554846c" (UID: "db373164-2a89-4bd8-803b-3a3c4554846c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:45:20 crc kubenswrapper[4705]: I0124 07:45:20.925001 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db373164-2a89-4bd8-803b-3a3c4554846c-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:20 crc kubenswrapper[4705]: I0124 07:45:20.931626 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db373164-2a89-4bd8-803b-3a3c4554846c-kube-api-access-zfv5s" (OuterVolumeSpecName: "kube-api-access-zfv5s") pod "db373164-2a89-4bd8-803b-3a3c4554846c" (UID: "db373164-2a89-4bd8-803b-3a3c4554846c"). InnerVolumeSpecName "kube-api-access-zfv5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:45:21 crc kubenswrapper[4705]: I0124 07:45:21.026135 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfv5s\" (UniqueName: \"kubernetes.io/projected/db373164-2a89-4bd8-803b-3a3c4554846c-kube-api-access-zfv5s\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:21 crc kubenswrapper[4705]: I0124 07:45:21.053982 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db373164-2a89-4bd8-803b-3a3c4554846c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db373164-2a89-4bd8-803b-3a3c4554846c" (UID: "db373164-2a89-4bd8-803b-3a3c4554846c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:45:21 crc kubenswrapper[4705]: I0124 07:45:21.126895 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db373164-2a89-4bd8-803b-3a3c4554846c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:21 crc kubenswrapper[4705]: I0124 07:45:21.608349 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7lfr8" Jan 24 07:45:21 crc kubenswrapper[4705]: I0124 07:45:21.609005 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lfr8" event={"ID":"db373164-2a89-4bd8-803b-3a3c4554846c","Type":"ContainerDied","Data":"e162568b692f4b8871efa8baef87d0d2f18cb980e56e2b9608f268be97549d39"} Jan 24 07:45:21 crc kubenswrapper[4705]: I0124 07:45:21.609037 4705 scope.go:117] "RemoveContainer" containerID="1088c2d5a2b5091864a4dacee01e2982ae25d1b9f43d306abbbfafb7c4daf98d" Jan 24 07:45:21 crc kubenswrapper[4705]: I0124 07:45:21.630693 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7lfr8"] Jan 24 07:45:21 crc kubenswrapper[4705]: I0124 07:45:21.633958 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7lfr8"] Jan 24 07:45:21 crc kubenswrapper[4705]: I0124 07:45:21.698781 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pdt8p" Jan 24 07:45:21 crc kubenswrapper[4705]: I0124 07:45:21.700087 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pdt8p" Jan 24 07:45:21 crc kubenswrapper[4705]: I0124 07:45:21.745296 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pdt8p" Jan 24 07:45:23 crc kubenswrapper[4705]: I0124 07:45:23.585138 4705 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db373164-2a89-4bd8-803b-3a3c4554846c" path="/var/lib/kubelet/pods/db373164-2a89-4bd8-803b-3a3c4554846c/volumes" Jan 24 07:45:23 crc kubenswrapper[4705]: I0124 07:45:23.714812 4705 scope.go:117] "RemoveContainer" containerID="968ea6723ed3f03471b89f9705eb82f1c8df84d2dccf75f86babb2d63657ddc8" Jan 24 07:45:24 crc kubenswrapper[4705]: E0124 07:45:24.062129 4705 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.063461 4705 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 07:45:24 crc kubenswrapper[4705]: E0124 07:45:24.063786 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db373164-2a89-4bd8-803b-3a3c4554846c" containerName="extract-content" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.063810 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="db373164-2a89-4bd8-803b-3a3c4554846c" containerName="extract-content" Jan 24 07:45:24 crc kubenswrapper[4705]: E0124 07:45:24.063859 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db373164-2a89-4bd8-803b-3a3c4554846c" containerName="registry-server" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.063869 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="db373164-2a89-4bd8-803b-3a3c4554846c" containerName="registry-server" Jan 24 07:45:24 crc kubenswrapper[4705]: E0124 07:45:24.063892 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db373164-2a89-4bd8-803b-3a3c4554846c" containerName="extract-utilities" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.063902 4705 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="db373164-2a89-4bd8-803b-3a3c4554846c" containerName="extract-utilities" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.064032 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="db373164-2a89-4bd8-803b-3a3c4554846c" containerName="registry-server" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.064426 4705 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.064536 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.064976 4705 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065157 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704" gracePeriod=15 Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065191 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07" gracePeriod=15 Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065271 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37" gracePeriod=15 Jan 24 07:45:24 crc 
kubenswrapper[4705]: I0124 07:45:24.065312 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29" gracePeriod=15 Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065348 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212" gracePeriod=15 Jan 24 07:45:24 crc kubenswrapper[4705]: E0124 07:45:24.065386 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065402 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 07:45:24 crc kubenswrapper[4705]: E0124 07:45:24.065415 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065421 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 24 07:45:24 crc kubenswrapper[4705]: E0124 07:45:24.065432 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065437 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 07:45:24 crc kubenswrapper[4705]: 
E0124 07:45:24.065448 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065454 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 24 07:45:24 crc kubenswrapper[4705]: E0124 07:45:24.065464 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065471 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 24 07:45:24 crc kubenswrapper[4705]: E0124 07:45:24.065478 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065485 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 24 07:45:24 crc kubenswrapper[4705]: E0124 07:45:24.065493 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065500 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065609 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065627 4705 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065637 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065647 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065656 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.065667 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.166210 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.166264 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.166281 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.166364 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.166393 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.166416 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.166441 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.166461 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.267577 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.267662 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.267704 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.267741 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.267735 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.267772 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.267810 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.267814 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.267863 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.267872 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.267899 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.267926 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.267896 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.267956 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.267982 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.268019 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:24 crc kubenswrapper[4705]: E0124 07:45:24.554181 4705 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.15:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-9ffcg.188d9b10ad2b1669 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-9ffcg,UID:34532038-b143-4391-99f3-37275497f03e,APIVersion:v1,ResourceVersion:28473,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 17.064s (17.064s including waiting). 
Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 07:45:24.553537129 +0000 UTC m=+263.273410417,LastTimestamp:2026-01-24 07:45:24.553537129 +0000 UTC m=+263.273410417,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.624566 4705 generic.go:334] "Generic (PLEG): container finished" podID="63fd1206-c28b-4e05-94ab-8935afb05436" containerID="8dbb2d3447860ee9ff58595bbcb0089799a5c4a33f3561129f56ed05f47877cc" exitCode=0 Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.624675 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"63fd1206-c28b-4e05-94ab-8935afb05436","Type":"ContainerDied","Data":"8dbb2d3447860ee9ff58595bbcb0089799a5c4a33f3561129f56ed05f47877cc"} Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.625384 4705 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.625717 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.629076 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 
07:45:24.630411 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.631048 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07" exitCode=0 Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.631069 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37" exitCode=0 Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.631077 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29" exitCode=0 Jan 24 07:45:24 crc kubenswrapper[4705]: I0124 07:45:24.631085 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212" exitCode=2 Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.344217 4705 scope.go:117] "RemoveContainer" containerID="94567fa029df50a2a1a0d69037ecd285eab9a217eaace19bb12109a7f107b7e5" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.386842 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.387516 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.448492 4705 scope.go:117] "RemoveContainer" containerID="b3aafbeafdcd576883bdfaad569a87cdf04ffe73f8849bb10f04bf0d6c8b16a3" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.495220 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/63fd1206-c28b-4e05-94ab-8935afb05436-kubelet-dir\") pod \"63fd1206-c28b-4e05-94ab-8935afb05436\" (UID: \"63fd1206-c28b-4e05-94ab-8935afb05436\") " Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.495289 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63fd1206-c28b-4e05-94ab-8935afb05436-kube-api-access\") pod \"63fd1206-c28b-4e05-94ab-8935afb05436\" (UID: \"63fd1206-c28b-4e05-94ab-8935afb05436\") " Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.495399 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/63fd1206-c28b-4e05-94ab-8935afb05436-var-lock\") pod \"63fd1206-c28b-4e05-94ab-8935afb05436\" (UID: \"63fd1206-c28b-4e05-94ab-8935afb05436\") " Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.495410 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63fd1206-c28b-4e05-94ab-8935afb05436-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "63fd1206-c28b-4e05-94ab-8935afb05436" (UID: 
"63fd1206-c28b-4e05-94ab-8935afb05436"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.495639 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63fd1206-c28b-4e05-94ab-8935afb05436-var-lock" (OuterVolumeSpecName: "var-lock") pod "63fd1206-c28b-4e05-94ab-8935afb05436" (UID: "63fd1206-c28b-4e05-94ab-8935afb05436"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.495704 4705 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/63fd1206-c28b-4e05-94ab-8935afb05436-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.503047 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63fd1206-c28b-4e05-94ab-8935afb05436-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "63fd1206-c28b-4e05-94ab-8935afb05436" (UID: "63fd1206-c28b-4e05-94ab-8935afb05436"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.598123 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63fd1206-c28b-4e05-94ab-8935afb05436-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.598165 4705 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/63fd1206-c28b-4e05-94ab-8935afb05436-var-lock\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.656600 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.658934 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.659662 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.660133 4705 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.660646 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: 
connect: connection refused" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.661260 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704" exitCode=0 Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.661363 4705 scope.go:117] "RemoveContainer" containerID="014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.664305 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"63fd1206-c28b-4e05-94ab-8935afb05436","Type":"ContainerDied","Data":"6390e4bf0f93fe58036cb5a43d9a7c31fc5255e52758a699bd1644d16773c191"} Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.664349 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.664372 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6390e4bf0f93fe58036cb5a43d9a7c31fc5255e52758a699bd1644d16773c191" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.683458 4705 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.683670 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 
07:45:26.695668 4705 scope.go:117] "RemoveContainer" containerID="e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.715755 4705 scope.go:117] "RemoveContainer" containerID="5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.738958 4705 scope.go:117] "RemoveContainer" containerID="ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.752411 4705 scope.go:117] "RemoveContainer" containerID="6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.769760 4705 scope.go:117] "RemoveContainer" containerID="e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.790259 4705 scope.go:117] "RemoveContainer" containerID="014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07" Jan 24 07:45:26 crc kubenswrapper[4705]: E0124 07:45:26.790742 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\": container with ID starting with 014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07 not found: ID does not exist" containerID="014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.790782 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07"} err="failed to get container status \"014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\": rpc error: code = NotFound desc = could not find container \"014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07\": container with ID starting with 
014d6e6342a84c6fb6ca909c7955a2d9c9b9e434fb7bc8623fcdcac5b8248a07 not found: ID does not exist" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.790830 4705 scope.go:117] "RemoveContainer" containerID="e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37" Jan 24 07:45:26 crc kubenswrapper[4705]: E0124 07:45:26.791186 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\": container with ID starting with e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37 not found: ID does not exist" containerID="e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.791238 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37"} err="failed to get container status \"e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\": rpc error: code = NotFound desc = could not find container \"e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37\": container with ID starting with e5ccf312daeae013fcf469a66627cce03bad5c2cda72011cd1eecec29ed55d37 not found: ID does not exist" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.791272 4705 scope.go:117] "RemoveContainer" containerID="5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29" Jan 24 07:45:26 crc kubenswrapper[4705]: E0124 07:45:26.793068 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\": container with ID starting with 5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29 not found: ID does not exist" containerID="5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29" Jan 24 07:45:26 crc 
kubenswrapper[4705]: I0124 07:45:26.793096 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29"} err="failed to get container status \"5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\": rpc error: code = NotFound desc = could not find container \"5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29\": container with ID starting with 5ba1bd2bfcd093143ddab56dc1d750fc6407ac8ce280e72ef3baec2eead8ad29 not found: ID does not exist" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.793111 4705 scope.go:117] "RemoveContainer" containerID="ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212" Jan 24 07:45:26 crc kubenswrapper[4705]: E0124 07:45:26.793454 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\": container with ID starting with ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212 not found: ID does not exist" containerID="ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.793494 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212"} err="failed to get container status \"ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\": rpc error: code = NotFound desc = could not find container \"ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212\": container with ID starting with ee6eb7b2629224941c769146ea5155188c567ad7402cc87c70d0bf428a735212 not found: ID does not exist" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.793520 4705 scope.go:117] "RemoveContainer" containerID="6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704" Jan 24 
07:45:26 crc kubenswrapper[4705]: E0124 07:45:26.793866 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\": container with ID starting with 6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704 not found: ID does not exist" containerID="6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.793901 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704"} err="failed to get container status \"6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\": rpc error: code = NotFound desc = could not find container \"6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704\": container with ID starting with 6b991a21712dcd3114e617ccfedc220e9418c88a836deb723065c7e7fef80704 not found: ID does not exist" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.793921 4705 scope.go:117] "RemoveContainer" containerID="e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044" Jan 24 07:45:26 crc kubenswrapper[4705]: E0124 07:45:26.796874 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\": container with ID starting with e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044 not found: ID does not exist" containerID="e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.796906 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044"} err="failed to get container status 
\"e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\": rpc error: code = NotFound desc = could not find container \"e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044\": container with ID starting with e5165fa6a97d69fb94c7c1fbc3fb1b29e77eab50455c20f033779dadc5fa2044 not found: ID does not exist" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.800315 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.800358 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.800432 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.800515 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.800523 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.800648 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.800962 4705 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.800985 4705 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:26 crc kubenswrapper[4705]: I0124 07:45:26.800999 4705 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.582532 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.681607 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k26f" event={"ID":"690b269f-3c5d-47b5-a11b-6c44dd6b1f95","Type":"ContainerStarted","Data":"e9b60610faf8cb0dfa91bcfc6bc6e810a29808048c71a06dab178b904242cbe3"} Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.683531 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.684256 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.687253 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gz2gs" event={"ID":"7a408baf-8e2f-438d-b77f-2abd317fe09f","Type":"ContainerStarted","Data":"a38c55b284891553899c2bde7fc073ad16914ece5c5b93b3d37dd2691203c034"} Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.688478 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.692363 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.695218 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ffcg" event={"ID":"34532038-b143-4391-99f3-37275497f03e","Type":"ContainerStarted","Data":"7a68ddec5163e25591026a873d624b11f5a6f681a0ae5493d6e947de65b6f880"} Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.696923 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" 
Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.699984 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.700714 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.701312 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.701556 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.701739 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.701980 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.702122 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.702259 4705 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.702410 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.702568 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.703351 4705 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.703502 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.703660 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.703799 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.704004 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.705873 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmflx" 
event={"ID":"74ac561c-5afe-4308-814f-11bf3f93f4ac","Type":"ContainerStarted","Data":"fd8c8215603005aa0d8b622be0eced4fc7f63098b55fdd29c8fb068314521345"} Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.709354 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.713991 4705 status_manager.go:851] "Failed to get status for pod" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" pod="openshift-marketplace/redhat-marketplace-qmflx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmflx\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.714347 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.714512 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.715235 4705 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.715616 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.716546 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64v8j" event={"ID":"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2","Type":"ContainerStarted","Data":"8b3c126d86952e3d8018217e9b59406f4d562e5be64d27143c129828fd470763"} Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.717329 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.717936 4705 status_manager.go:851] "Failed to get status for pod" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" pod="openshift-marketplace/redhat-marketplace-qmflx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmflx\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.718242 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": 
dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.719026 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.719544 4705 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.720021 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:27 crc kubenswrapper[4705]: I0124 07:45:27.720358 4705 status_manager.go:851] "Failed to get status for pod" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" pod="openshift-marketplace/redhat-operators-64v8j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64v8j\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:28 crc kubenswrapper[4705]: E0124 07:45:28.586851 4705 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": 
dial tcp 38.129.56.15:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" volumeName="registry-storage" Jan 24 07:45:29 crc kubenswrapper[4705]: E0124 07:45:29.099370 4705 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.15:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:29 crc kubenswrapper[4705]: I0124 07:45:29.100056 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:29 crc kubenswrapper[4705]: E0124 07:45:29.295677 4705 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.15:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-9ffcg.188d9b10ad2b1669 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-9ffcg,UID:34532038-b143-4391-99f3-37275497f03e,APIVersion:v1,ResourceVersion:28473,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 17.064s (17.064s including waiting). 
Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 07:45:24.553537129 +0000 UTC m=+263.273410417,LastTimestamp:2026-01-24 07:45:24.553537129 +0000 UTC m=+263.273410417,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 24 07:45:29 crc kubenswrapper[4705]: I0124 07:45:29.728999 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4c88d21d1c897fea08dd3e80512a4243d36c3b7a171cd5a9676333ddd481de42"} Jan 24 07:45:29 crc kubenswrapper[4705]: I0124 07:45:29.729052 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1d2d2ea10adc4213d3d463ec0e179a943b607b438facac8f042291c468ef2193"} Jan 24 07:45:29 crc kubenswrapper[4705]: I0124 07:45:29.729732 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:29 crc kubenswrapper[4705]: E0124 07:45:29.729869 4705 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.15:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:45:29 crc kubenswrapper[4705]: I0124 07:45:29.730107 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:29 crc kubenswrapper[4705]: I0124 07:45:29.730437 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:29 crc kubenswrapper[4705]: I0124 07:45:29.730672 4705 status_manager.go:851] "Failed to get status for pod" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" pod="openshift-marketplace/redhat-operators-64v8j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64v8j\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:29 crc kubenswrapper[4705]: I0124 07:45:29.730929 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:29 crc kubenswrapper[4705]: I0124 07:45:29.731245 4705 status_manager.go:851] "Failed to get status for pod" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" pod="openshift-marketplace/redhat-marketplace-qmflx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmflx\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.200945 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9ffcg" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.201256 4705 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9ffcg" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.223085 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gz2gs" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.223319 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gz2gs" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.240895 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9ffcg" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.241362 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.241673 4705 status_manager.go:851] "Failed to get status for pod" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" pod="openshift-marketplace/redhat-marketplace-qmflx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmflx\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.242141 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.242318 4705 status_manager.go:851] "Failed to get status 
for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.242454 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.242618 4705 status_manager.go:851] "Failed to get status for pod" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" pod="openshift-marketplace/redhat-operators-64v8j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64v8j\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.275236 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gz2gs" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.275793 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.276296 4705 status_manager.go:851] "Failed to get status for pod" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" pod="openshift-marketplace/redhat-operators-64v8j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64v8j\": dial tcp 38.129.56.15:6443: 
connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.276679 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.277161 4705 status_manager.go:851] "Failed to get status for pod" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" pod="openshift-marketplace/redhat-marketplace-qmflx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmflx\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.277449 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.277658 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.474595 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4k26f" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.474646 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4k26f" Jan 
24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.511956 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4k26f" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.512767 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.513156 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.513396 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.513641 4705 status_manager.go:851] "Failed to get status for pod" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" pod="openshift-marketplace/redhat-operators-64v8j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64v8j\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.513939 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:30 crc kubenswrapper[4705]: I0124 07:45:30.514282 4705 status_manager.go:851] "Failed to get status for pod" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" pod="openshift-marketplace/redhat-marketplace-qmflx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmflx\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.584738 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.585109 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.585498 4705 status_manager.go:851] "Failed to get status for pod" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" pod="openshift-marketplace/redhat-operators-64v8j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64v8j\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.586098 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.586319 4705 status_manager.go:851] "Failed to get status for pod" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" pod="openshift-marketplace/redhat-marketplace-qmflx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmflx\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.586509 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.742072 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pdt8p" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.742930 4705 status_manager.go:851] "Failed to get status for pod" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" pod="openshift-marketplace/redhat-operators-64v8j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64v8j\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.743110 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.743302 
4705 status_manager.go:851] "Failed to get status for pod" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" pod="openshift-marketplace/redhat-marketplace-qmflx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmflx\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.743490 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.743638 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.743947 4705 status_manager.go:851] "Failed to get status for pod" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" pod="openshift-marketplace/redhat-marketplace-pdt8p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pdt8p\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.744352 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.775970 4705 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9ffcg" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.776631 4705 status_manager.go:851] "Failed to get status for pod" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" pod="openshift-marketplace/redhat-operators-64v8j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64v8j\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.777025 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.777291 4705 status_manager.go:851] "Failed to get status for pod" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" pod="openshift-marketplace/redhat-marketplace-qmflx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmflx\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.777677 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.777977 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.778199 4705 status_manager.go:851] "Failed to get status for pod" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" pod="openshift-marketplace/redhat-marketplace-pdt8p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pdt8p\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.778509 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.789067 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gz2gs" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.789589 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.789954 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.790303 4705 
status_manager.go:851] "Failed to get status for pod" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" pod="openshift-marketplace/redhat-marketplace-pdt8p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pdt8p\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.790534 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.790801 4705 status_manager.go:851] "Failed to get status for pod" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" pod="openshift-marketplace/redhat-operators-64v8j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64v8j\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.791151 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:31 crc kubenswrapper[4705]: I0124 07:45:31.791688 4705 status_manager.go:851] "Failed to get status for pod" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" pod="openshift-marketplace/redhat-marketplace-qmflx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmflx\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: E0124 07:45:32.477891 4705 controller.go:195] "Failed to update 
lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: E0124 07:45:32.478665 4705 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: E0124 07:45:32.478940 4705 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: E0124 07:45:32.479258 4705 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: E0124 07:45:32.479697 4705 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.479791 4705 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 24 07:45:32 crc kubenswrapper[4705]: E0124 07:45:32.480143 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="200ms" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.509392 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qmflx" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.509455 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qmflx" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.545066 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qmflx" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.545545 4705 status_manager.go:851] "Failed to get status for pod" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" pod="openshift-marketplace/redhat-operators-64v8j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64v8j\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.545699 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.545961 4705 status_manager.go:851] "Failed to get status for pod" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" pod="openshift-marketplace/redhat-marketplace-qmflx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmflx\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.546304 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 
38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.546475 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.546620 4705 status_manager.go:851] "Failed to get status for pod" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" pod="openshift-marketplace/redhat-marketplace-pdt8p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pdt8p\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.546765 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: E0124 07:45:32.681089 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="400ms" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.779650 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qmflx" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.780335 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.780724 4705 status_manager.go:851] "Failed to get status for pod" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" pod="openshift-marketplace/redhat-marketplace-pdt8p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pdt8p\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.781070 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.781347 4705 status_manager.go:851] "Failed to get status for pod" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" pod="openshift-marketplace/redhat-operators-64v8j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64v8j\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.781592 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.781839 4705 status_manager.go:851] "Failed to get status for pod" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" pod="openshift-marketplace/redhat-marketplace-qmflx" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmflx\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.782065 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.820526 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-64v8j" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.820577 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-64v8j" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.860758 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-64v8j" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.861688 4705 status_manager.go:851] "Failed to get status for pod" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" pod="openshift-marketplace/redhat-marketplace-pdt8p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pdt8p\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.862213 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.862684 4705 status_manager.go:851] "Failed 
to get status for pod" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" pod="openshift-marketplace/redhat-operators-64v8j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64v8j\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.863153 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.863632 4705 status_manager.go:851] "Failed to get status for pod" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" pod="openshift-marketplace/redhat-marketplace-qmflx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmflx\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.864033 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:32 crc kubenswrapper[4705]: I0124 07:45:32.864388 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:33 crc kubenswrapper[4705]: E0124 07:45:33.081907 4705 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="800ms" Jan 24 07:45:33 crc kubenswrapper[4705]: I0124 07:45:33.787967 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-64v8j" Jan 24 07:45:33 crc kubenswrapper[4705]: I0124 07:45:33.788491 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:33 crc kubenswrapper[4705]: I0124 07:45:33.788773 4705 status_manager.go:851] "Failed to get status for pod" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" pod="openshift-marketplace/redhat-marketplace-qmflx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmflx\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:33 crc kubenswrapper[4705]: I0124 07:45:33.789136 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:33 crc kubenswrapper[4705]: I0124 07:45:33.789525 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 
07:45:33 crc kubenswrapper[4705]: I0124 07:45:33.789794 4705 status_manager.go:851] "Failed to get status for pod" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" pod="openshift-marketplace/redhat-marketplace-pdt8p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pdt8p\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:33 crc kubenswrapper[4705]: I0124 07:45:33.790027 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:33 crc kubenswrapper[4705]: I0124 07:45:33.790319 4705 status_manager.go:851] "Failed to get status for pod" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" pod="openshift-marketplace/redhat-operators-64v8j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64v8j\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:33 crc kubenswrapper[4705]: E0124 07:45:33.883043 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="1.6s" Jan 24 07:45:34 crc kubenswrapper[4705]: I0124 07:45:34.574880 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:34 crc kubenswrapper[4705]: I0124 07:45:34.575463 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:34 crc kubenswrapper[4705]: I0124 07:45:34.575899 4705 status_manager.go:851] "Failed to get status for pod" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" pod="openshift-marketplace/redhat-marketplace-qmflx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmflx\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:34 crc kubenswrapper[4705]: I0124 07:45:34.576406 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:34 crc kubenswrapper[4705]: I0124 07:45:34.576742 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:34 crc kubenswrapper[4705]: I0124 07:45:34.577195 4705 status_manager.go:851] "Failed to get status for pod" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" pod="openshift-marketplace/redhat-marketplace-pdt8p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pdt8p\": 
dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:34 crc kubenswrapper[4705]: I0124 07:45:34.577422 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:34 crc kubenswrapper[4705]: I0124 07:45:34.577707 4705 status_manager.go:851] "Failed to get status for pod" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" pod="openshift-marketplace/redhat-operators-64v8j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64v8j\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:34 crc kubenswrapper[4705]: I0124 07:45:34.592709 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39" Jan 24 07:45:34 crc kubenswrapper[4705]: I0124 07:45:34.592742 4705 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39" Jan 24 07:45:34 crc kubenswrapper[4705]: E0124 07:45:34.593118 4705 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:34 crc kubenswrapper[4705]: I0124 07:45:34.593610 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:34 crc kubenswrapper[4705]: W0124 07:45:34.617567 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-7090ac99d6b6e9bd82625706881d7dd87c2d773d5502e36727aac3581e91bb57 WatchSource:0}: Error finding container 7090ac99d6b6e9bd82625706881d7dd87c2d773d5502e36727aac3581e91bb57: Status 404 returned error can't find the container with id 7090ac99d6b6e9bd82625706881d7dd87c2d773d5502e36727aac3581e91bb57 Jan 24 07:45:34 crc kubenswrapper[4705]: I0124 07:45:34.755406 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7090ac99d6b6e9bd82625706881d7dd87c2d773d5502e36727aac3581e91bb57"} Jan 24 07:45:35 crc kubenswrapper[4705]: E0124 07:45:35.484106 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="3.2s" Jan 24 07:45:35 crc kubenswrapper[4705]: I0124 07:45:35.763814 4705 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="2890da8a960f4dca8b7caf66beafe46accaf74ba6f102d5eec37d76f24bba080" exitCode=0 Jan 24 07:45:35 crc kubenswrapper[4705]: I0124 07:45:35.763881 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"2890da8a960f4dca8b7caf66beafe46accaf74ba6f102d5eec37d76f24bba080"} Jan 24 07:45:35 crc kubenswrapper[4705]: I0124 07:45:35.764173 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39" Jan 24 07:45:35 crc kubenswrapper[4705]: I0124 07:45:35.764202 4705 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39" Jan 24 07:45:35 crc kubenswrapper[4705]: I0124 07:45:35.765063 4705 status_manager.go:851] "Failed to get status for pod" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" pod="openshift-marketplace/redhat-operators-64v8j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64v8j\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:35 crc kubenswrapper[4705]: E0124 07:45:35.765065 4705 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:35 crc kubenswrapper[4705]: I0124 07:45:35.765415 4705 status_manager.go:851] "Failed to get status for pod" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" pod="openshift-marketplace/certified-operators-4k26f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4k26f\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:35 crc kubenswrapper[4705]: I0124 07:45:35.766085 4705 status_manager.go:851] "Failed to get status for pod" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" pod="openshift-marketplace/redhat-marketplace-qmflx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmflx\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:35 crc kubenswrapper[4705]: I0124 07:45:35.766443 4705 status_manager.go:851] "Failed to get status for pod" podUID="34532038-b143-4391-99f3-37275497f03e" 
pod="openshift-marketplace/certified-operators-9ffcg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9ffcg\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:35 crc kubenswrapper[4705]: I0124 07:45:35.766744 4705 status_manager.go:851] "Failed to get status for pod" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" pod="openshift-marketplace/community-operators-gz2gs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gz2gs\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:35 crc kubenswrapper[4705]: I0124 07:45:35.766993 4705 status_manager.go:851] "Failed to get status for pod" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" pod="openshift-marketplace/redhat-marketplace-pdt8p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pdt8p\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:35 crc kubenswrapper[4705]: I0124 07:45:35.767224 4705 status_manager.go:851] "Failed to get status for pod" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Jan 24 07:45:36 crc kubenswrapper[4705]: I0124 07:45:36.787574 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"57c0fa7ed620ccf33dac74c103d7c6ee65f7dc47901cdfe9c4f9f6d4391dfbdb"} Jan 24 07:45:36 crc kubenswrapper[4705]: I0124 07:45:36.787937 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"94e3848916eb503c4f5ee7472c52bf2e0365701baa910c301e78c62a4172d95b"} Jan 24 07:45:36 crc kubenswrapper[4705]: I0124 07:45:36.787949 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b8c626e1f373ca97fa7b28c3126024f2737b8eff64c12e68d7ecaaa139f9fee0"} Jan 24 07:45:37 crc kubenswrapper[4705]: I0124 07:45:37.796382 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f81c56e9af076cc03b8a3dc75f24ce8a1f49bda678435d3b59fae08955041f08"} Jan 24 07:45:37 crc kubenswrapper[4705]: I0124 07:45:37.796458 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bc71f9876ea522e1ea457b659febcdce0a3d36bdbf779dd087a7755c49305346"} Jan 24 07:45:37 crc kubenswrapper[4705]: I0124 07:45:37.796513 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:37 crc kubenswrapper[4705]: I0124 07:45:37.796586 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39" Jan 24 07:45:37 crc kubenswrapper[4705]: I0124 07:45:37.796604 4705 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39" Jan 24 07:45:39 crc kubenswrapper[4705]: I0124 07:45:39.594274 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:39 crc kubenswrapper[4705]: I0124 07:45:39.594628 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:39 crc kubenswrapper[4705]: I0124 07:45:39.599656 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:39 crc kubenswrapper[4705]: I0124 07:45:39.811241 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 24 07:45:39 crc kubenswrapper[4705]: I0124 07:45:39.811302 4705 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166" exitCode=1 Jan 24 07:45:39 crc kubenswrapper[4705]: I0124 07:45:39.811339 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166"} Jan 24 07:45:39 crc kubenswrapper[4705]: I0124 07:45:39.811915 4705 scope.go:117] "RemoveContainer" containerID="006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166" Jan 24 07:45:40 crc kubenswrapper[4705]: I0124 07:45:40.512081 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4k26f" Jan 24 07:45:40 crc kubenswrapper[4705]: I0124 07:45:40.589468 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:45:40 crc kubenswrapper[4705]: I0124 07:45:40.819811 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 24 07:45:40 crc kubenswrapper[4705]: I0124 07:45:40.819899 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a737faf487d8621091285cbe967b5fadaf1b34eb2ad8bf743122d9dc5856b87d"} Jan 24 07:45:43 crc kubenswrapper[4705]: I0124 07:45:43.062651 4705 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:43 crc kubenswrapper[4705]: I0124 07:45:43.420389 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="723f36e1-e876-4ef4-af7b-37907b691301" Jan 24 07:45:43 crc kubenswrapper[4705]: I0124 07:45:43.834128 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39" Jan 24 07:45:43 crc kubenswrapper[4705]: I0124 07:45:43.834155 4705 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39" Jan 24 07:45:43 crc kubenswrapper[4705]: I0124 07:45:43.838745 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 07:45:43 crc kubenswrapper[4705]: I0124 07:45:43.842414 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="723f36e1-e876-4ef4-af7b-37907b691301" Jan 24 07:45:44 crc kubenswrapper[4705]: I0124 07:45:44.840363 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39" Jan 24 07:45:44 crc kubenswrapper[4705]: I0124 07:45:44.840703 4705 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39" Jan 24 07:45:44 crc kubenswrapper[4705]: I0124 07:45:44.843055 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="723f36e1-e876-4ef4-af7b-37907b691301" Jan 24 07:45:45 crc kubenswrapper[4705]: I0124 07:45:45.295724 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:45:45 crc kubenswrapper[4705]: I0124 07:45:45.296789 4705 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 24 07:45:45 crc kubenswrapper[4705]: I0124 07:45:45.296840 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 24 07:45:45 crc kubenswrapper[4705]: I0124 07:45:45.331007 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:45:49 crc kubenswrapper[4705]: I0124 07:45:49.642717 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 24 07:45:49 crc kubenswrapper[4705]: I0124 07:45:49.675038 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 24 07:45:49 crc kubenswrapper[4705]: I0124 07:45:49.733029 4705 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 24 07:45:49 crc kubenswrapper[4705]: I0124 07:45:49.929715 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 24 07:45:49 crc kubenswrapper[4705]: I0124 07:45:49.962943 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 24 07:45:50 crc kubenswrapper[4705]: I0124 07:45:50.554402 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 24 07:45:50 crc kubenswrapper[4705]: I0124 07:45:50.740547 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 24 07:45:50 crc kubenswrapper[4705]: I0124 07:45:50.741074 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 24 07:45:51 crc kubenswrapper[4705]: I0124 07:45:51.725530 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 24 07:45:52 crc kubenswrapper[4705]: I0124 07:45:52.414105 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 24 07:45:52 crc kubenswrapper[4705]: I0124 07:45:52.598180 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 24 07:45:53 crc kubenswrapper[4705]: I0124 07:45:53.138280 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 24 07:45:53 crc kubenswrapper[4705]: I0124 07:45:53.611482 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 24 07:45:53 crc kubenswrapper[4705]: I0124 
07:45:53.816420 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 24 07:45:54 crc kubenswrapper[4705]: I0124 07:45:54.064355 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 24 07:45:54 crc kubenswrapper[4705]: I0124 07:45:54.611177 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 24 07:45:54 crc kubenswrapper[4705]: I0124 07:45:54.910136 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 24 07:45:54 crc kubenswrapper[4705]: I0124 07:45:54.933371 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 24 07:45:55 crc kubenswrapper[4705]: I0124 07:45:55.030274 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 24 07:45:55 crc kubenswrapper[4705]: I0124 07:45:55.114236 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 24 07:45:55 crc kubenswrapper[4705]: I0124 07:45:55.217456 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 24 07:45:55 crc kubenswrapper[4705]: I0124 07:45:55.295883 4705 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 24 07:45:55 crc kubenswrapper[4705]: I0124 07:45:55.295940 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 24 07:45:55 crc kubenswrapper[4705]: I0124 07:45:55.307876 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 24 07:45:55 crc kubenswrapper[4705]: I0124 07:45:55.359460 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 24 07:45:55 crc kubenswrapper[4705]: I0124 07:45:55.463743 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 24 07:45:55 crc kubenswrapper[4705]: I0124 07:45:55.554660 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 24 07:45:56 crc kubenswrapper[4705]: I0124 07:45:56.238256 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 24 07:45:56 crc kubenswrapper[4705]: I0124 07:45:56.416954 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 24 07:45:56 crc kubenswrapper[4705]: I0124 07:45:56.655576 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 24 07:45:56 crc kubenswrapper[4705]: I0124 07:45:56.656927 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 24 07:45:56 crc kubenswrapper[4705]: I0124 07:45:56.718161 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 24 07:45:56 crc kubenswrapper[4705]: I0124 07:45:56.750344 4705 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 24 07:45:56 crc 
kubenswrapper[4705]: I0124 07:45:56.835712 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 24 07:45:56 crc kubenswrapper[4705]: I0124 07:45:56.920411 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 24 07:45:57 crc kubenswrapper[4705]: I0124 07:45:57.216081 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 24 07:45:57 crc kubenswrapper[4705]: I0124 07:45:57.263521 4705 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 24 07:45:57 crc kubenswrapper[4705]: I0124 07:45:57.447510 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 24 07:45:57 crc kubenswrapper[4705]: I0124 07:45:57.507331 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 24 07:45:57 crc kubenswrapper[4705]: I0124 07:45:57.532770 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 24 07:45:57 crc kubenswrapper[4705]: I0124 07:45:57.688582 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 24 07:45:57 crc kubenswrapper[4705]: I0124 07:45:57.719923 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 24 07:45:57 crc kubenswrapper[4705]: I0124 07:45:57.823305 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 24 07:45:57 crc kubenswrapper[4705]: I0124 07:45:57.890654 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 07:45:58.139690 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 07:45:58.144124 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 07:45:58.341633 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 07:45:58.341962 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 07:45:58.453282 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 07:45:58.508865 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 07:45:58.531598 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 07:45:58.656669 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 07:45:58.701180 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 07:45:58.722127 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 
07:45:58.775742 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 07:45:58.838466 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 07:45:58.847362 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 07:45:58.847393 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 07:45:58.856684 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 07:45:58.927590 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 24 07:45:58 crc kubenswrapper[4705]: I0124 07:45:58.996300 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.050165 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.161742 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.186744 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.226935 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 24 
07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.392261 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.453483 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.473362 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.561434 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.609757 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.636370 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.912598 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.916114 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.916153 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.916113 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 
07:45:59.916471 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.916490 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.916589 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 24 07:45:59 crc kubenswrapper[4705]: I0124 07:45:59.916617 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 07:46:00.101334 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 07:46:00.312570 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 07:46:00.368169 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 07:46:00.420489 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 07:46:00.511984 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 07:46:00.592975 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 07:46:00.594451 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 
07:46:00.595326 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 07:46:00.599858 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 07:46:00.660372 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 07:46:00.674057 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 07:46:00.739561 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 07:46:00.794604 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 07:46:00.835445 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 07:46:00.892685 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 07:46:00.902025 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 24 07:46:00 crc kubenswrapper[4705]: I0124 07:46:00.981606 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 24 07:46:01 crc kubenswrapper[4705]: I0124 07:46:01.011642 4705 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 24 07:46:01 crc kubenswrapper[4705]: I0124 07:46:01.137281 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 24 07:46:01 crc kubenswrapper[4705]: I0124 07:46:01.194680 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 24 07:46:01 crc kubenswrapper[4705]: I0124 07:46:01.259270 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 24 07:46:01 crc kubenswrapper[4705]: I0124 07:46:01.346526 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 24 07:46:01 crc kubenswrapper[4705]: I0124 07:46:01.346698 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 24 07:46:01 crc kubenswrapper[4705]: I0124 07:46:01.441997 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 24 07:46:01 crc kubenswrapper[4705]: I0124 07:46:01.468751 4705 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 24 07:46:01 crc kubenswrapper[4705]: I0124 07:46:01.589327 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 24 07:46:01 crc kubenswrapper[4705]: I0124 07:46:01.607109 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 24 07:46:01 crc kubenswrapper[4705]: I0124 07:46:01.696348 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 24 07:46:01 crc 
kubenswrapper[4705]: I0124 07:46:01.771673 4705 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 24 07:46:01 crc kubenswrapper[4705]: I0124 07:46:01.775798 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 24 07:46:01 crc kubenswrapper[4705]: I0124 07:46:01.929384 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.005081 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.020394 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.082126 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.160467 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.219784 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.277852 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.284766 4705 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.312702 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 24 07:46:02 crc 
kubenswrapper[4705]: I0124 07:46:02.315739 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.368408 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.417340 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.440334 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.517164 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.551851 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.590091 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.624436 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.743438 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.752826 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.771841 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.872934 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.929465 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 24 07:46:02 crc kubenswrapper[4705]: I0124 07:46:02.991878 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.017025 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.037535 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.206020 4705 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.234009 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.285497 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.289708 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.296617 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.314970 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.354287 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.571522 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.607278 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.671863 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.676262 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.685758 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.761246 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.813345 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.823981 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.877585 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.900027 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 24 07:46:03 crc kubenswrapper[4705]: I0124 07:46:03.907501 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.014333 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.114431 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.135455 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.223317 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.344112 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.352904 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.394095 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.481728 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.492736 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.500729 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.515876 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.590985 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.606749 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.611259 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.677313 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.762067 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.879509 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.888876 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.898213 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.936962 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 24 07:46:04 crc kubenswrapper[4705]: I0124 07:46:04.960774 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.024035 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.160317 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.207483 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.295954 4705 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.296014 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.296057 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.296493 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"a737faf487d8621091285cbe967b5fadaf1b34eb2ad8bf743122d9dc5856b87d"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.296592 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://a737faf487d8621091285cbe967b5fadaf1b34eb2ad8bf743122d9dc5856b87d" gracePeriod=30
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.323783 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.351418 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.359767 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.430702 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.498581 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.504800 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.540620 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.644556 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.696889 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.754963 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.803796 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.884341 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 24 07:46:05 crc kubenswrapper[4705]: I0124 07:46:05.999815 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.140265 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.277979 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.282832 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.301778 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.373646 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.404257 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.479954 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.529037 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.544442 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.595094 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.653164 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.696757 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.814401 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.816512 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.853463 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.895429 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.924650 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 24 07:46:06 crc kubenswrapper[4705]: I0124 07:46:06.968418 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.044187 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.238806 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.247479 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.340339 4705 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.340775 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gz2gs" podStartSLOduration=45.179484678 podStartE2EDuration="2m18.340758776s" podCreationTimestamp="2026-01-24 07:43:49 +0000 UTC" firstStartedPulling="2026-01-24 07:43:51.756678863 +0000 UTC m=+170.476552151" lastFinishedPulling="2026-01-24 07:45:24.917952961 +0000 UTC m=+263.637826249" observedRunningTime="2026-01-24 07:45:43.393342175 +0000 UTC m=+282.113215463" watchObservedRunningTime="2026-01-24 07:46:07.340758776 +0000 UTC m=+306.060632064"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.341631 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-64v8j" podStartSLOduration=46.332489211 podStartE2EDuration="2m15.341624592s" podCreationTimestamp="2026-01-24 07:43:52 +0000 UTC" firstStartedPulling="2026-01-24 07:43:57.33515492 +0000 UTC m=+176.055028208" lastFinishedPulling="2026-01-24 07:45:26.344290301 +0000 UTC m=+265.064163589" observedRunningTime="2026-01-24 07:45:43.316087663 +0000 UTC m=+282.035960951" watchObservedRunningTime="2026-01-24 07:46:07.341624592 +0000 UTC m=+306.061497880"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.342871 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9ffcg" podStartSLOduration=46.706568103 podStartE2EDuration="2m18.342864128s" podCreationTimestamp="2026-01-24 07:43:49 +0000 UTC" firstStartedPulling="2026-01-24 07:43:52.917227383 +0000 UTC m=+171.637100671" lastFinishedPulling="2026-01-24 07:45:24.553523408 +0000 UTC m=+263.273396696" observedRunningTime="2026-01-24 07:45:43.368642282 +0000 UTC m=+282.088515570" watchObservedRunningTime="2026-01-24 07:46:07.342864128 +0000 UTC m=+306.062737416"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.342948 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qmflx" podStartSLOduration=46.089734429 podStartE2EDuration="2m16.342944681s" podCreationTimestamp="2026-01-24 07:43:51 +0000 UTC" firstStartedPulling="2026-01-24 07:43:56.178789781 +0000 UTC m=+174.898663059" lastFinishedPulling="2026-01-24 07:45:26.432000013 +0000 UTC m=+265.151873311" observedRunningTime="2026-01-24 07:45:43.351738611 +0000 UTC m=+282.071611909" watchObservedRunningTime="2026-01-24 07:46:07.342944681 +0000 UTC m=+306.062817969"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.343998 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4k26f" podStartSLOduration=44.932241413 podStartE2EDuration="2m18.343992502s" podCreationTimestamp="2026-01-24 07:43:49 +0000 UTC" firstStartedPulling="2026-01-24 07:43:52.933411138 +0000 UTC m=+171.653284426" lastFinishedPulling="2026-01-24 07:45:26.345162227 +0000 UTC m=+265.065035515" observedRunningTime="2026-01-24 07:45:43.337052125 +0000 UTC m=+282.056925413" watchObservedRunningTime="2026-01-24 07:46:07.343992502 +0000 UTC m=+306.063865790"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.345355 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.345400 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.345734 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.345760 4705 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ea43a55-b49b-4cb0-bad8-cfb41ff4fb39"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.349607 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.366706 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.366688825 podStartE2EDuration="24.366688825s" podCreationTimestamp="2026-01-24 07:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:46:07.362509531 +0000 UTC m=+306.082382829" watchObservedRunningTime="2026-01-24 07:46:07.366688825 +0000 UTC m=+306.086562113"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.399300 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.446939 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.532353 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.583893 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.690478 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4k26f"]
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.691075 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4k26f" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" containerName="registry-server" containerID="cri-o://e9b60610faf8cb0dfa91bcfc6bc6e810a29808048c71a06dab178b904242cbe3" gracePeriod=30
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.695430 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9ffcg"]
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.695696 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9ffcg" podUID="34532038-b143-4391-99f3-37275497f03e" containerName="registry-server" containerID="cri-o://7a68ddec5163e25591026a873d624b11f5a6f681a0ae5493d6e947de65b6f880" gracePeriod=30
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.706469 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.707857 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9qgbp"]
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.708080 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9qgbp" podUID="e1f81499-3c8f-40b6-bd99-344558565c77" containerName="registry-server" containerID="cri-o://dfaa7523274614e9b6ae7139d225a56acc53ca076b355aac7985664fe9cc9459" gracePeriod=30
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.715228 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gz2gs"]
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.715892 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gz2gs" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" containerName="registry-server" containerID="cri-o://a38c55b284891553899c2bde7fc073ad16914ece5c5b93b3d37dd2691203c034" gracePeriod=30
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.724180 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h8gjk"]
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.724437 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" podUID="3bb788e4-fad9-4416-9042-7a46d8ef83b3" containerName="marketplace-operator" containerID="cri-o://0ebc6faa3a0b3d7dbd41ae1ccf8aadbbecde871135e49869f7d23000319f1c5b" gracePeriod=30
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.732627 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pdt8p"]
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.732932 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pdt8p" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" containerName="registry-server" containerID="cri-o://59a7b44cc571d89ebc9d625342a7e5eae37f2b14cc56e7a1f711e17fd711ae57" gracePeriod=30
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.747369 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmflx"]
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.747780 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qmflx" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" containerName="registry-server" containerID="cri-o://fd8c8215603005aa0d8b622be0eced4fc7f63098b55fdd29c8fb068314521345" gracePeriod=30
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.769746 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-64v8j"]
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.770104 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-64v8j" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" containerName="registry-server" containerID="cri-o://8b3c126d86952e3d8018217e9b59406f4d562e5be64d27143c129828fd470763" gracePeriod=30
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.788089 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.817797 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.828766 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.948862 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.982261 4705 generic.go:334] "Generic (PLEG): container finished" podID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" containerID="59a7b44cc571d89ebc9d625342a7e5eae37f2b14cc56e7a1f711e17fd711ae57" exitCode=0
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.982350 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pdt8p" event={"ID":"a6b1d233-4df3-4960-abd3-c8bf11ca322b","Type":"ContainerDied","Data":"59a7b44cc571d89ebc9d625342a7e5eae37f2b14cc56e7a1f711e17fd711ae57"}
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.982439 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.989390 4705 generic.go:334] "Generic (PLEG): container finished" podID="74ac561c-5afe-4308-814f-11bf3f93f4ac" containerID="fd8c8215603005aa0d8b622be0eced4fc7f63098b55fdd29c8fb068314521345" exitCode=0
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.989477 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmflx" event={"ID":"74ac561c-5afe-4308-814f-11bf3f93f4ac","Type":"ContainerDied","Data":"fd8c8215603005aa0d8b622be0eced4fc7f63098b55fdd29c8fb068314521345"}
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.991773 4705 generic.go:334] "Generic (PLEG): container finished" podID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" containerID="8b3c126d86952e3d8018217e9b59406f4d562e5be64d27143c129828fd470763" exitCode=0
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.991833 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64v8j" event={"ID":"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2","Type":"ContainerDied","Data":"8b3c126d86952e3d8018217e9b59406f4d562e5be64d27143c129828fd470763"}
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.996404 4705 generic.go:334] "Generic (PLEG): container finished" podID="e1f81499-3c8f-40b6-bd99-344558565c77" containerID="dfaa7523274614e9b6ae7139d225a56acc53ca076b355aac7985664fe9cc9459" exitCode=0
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.996459 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qgbp" event={"ID":"e1f81499-3c8f-40b6-bd99-344558565c77","Type":"ContainerDied","Data":"dfaa7523274614e9b6ae7139d225a56acc53ca076b355aac7985664fe9cc9459"}
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.998250 4705 generic.go:334] "Generic (PLEG): container finished" podID="3bb788e4-fad9-4416-9042-7a46d8ef83b3" containerID="0ebc6faa3a0b3d7dbd41ae1ccf8aadbbecde871135e49869f7d23000319f1c5b" exitCode=0
Jan 24 07:46:07 crc kubenswrapper[4705]: I0124 07:46:07.998308 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" event={"ID":"3bb788e4-fad9-4416-9042-7a46d8ef83b3","Type":"ContainerDied","Data":"0ebc6faa3a0b3d7dbd41ae1ccf8aadbbecde871135e49869f7d23000319f1c5b"}
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.000321 4705 generic.go:334] "Generic (PLEG): container finished" podID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" containerID="e9b60610faf8cb0dfa91bcfc6bc6e810a29808048c71a06dab178b904242cbe3" exitCode=0
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.000384 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k26f" event={"ID":"690b269f-3c5d-47b5-a11b-6c44dd6b1f95","Type":"ContainerDied","Data":"e9b60610faf8cb0dfa91bcfc6bc6e810a29808048c71a06dab178b904242cbe3"}
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.004138 4705 generic.go:334] "Generic (PLEG): container finished" podID="7a408baf-8e2f-438d-b77f-2abd317fe09f" containerID="a38c55b284891553899c2bde7fc073ad16914ece5c5b93b3d37dd2691203c034" exitCode=0
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.004180 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gz2gs" event={"ID":"7a408baf-8e2f-438d-b77f-2abd317fe09f","Type":"ContainerDied","Data":"a38c55b284891553899c2bde7fc073ad16914ece5c5b93b3d37dd2691203c034"}
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.005660 4705 generic.go:334] "Generic (PLEG): container finished" podID="34532038-b143-4391-99f3-37275497f03e" containerID="7a68ddec5163e25591026a873d624b11f5a6f681a0ae5493d6e947de65b6f880" exitCode=0
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.005924 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ffcg" event={"ID":"34532038-b143-4391-99f3-37275497f03e","Type":"ContainerDied","Data":"7a68ddec5163e25591026a873d624b11f5a6f681a0ae5493d6e947de65b6f880"}
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.128238 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.129303 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.146435 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.222575 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.284261 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.325764 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk"
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.340199 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4k26f"
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.389730 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmflx"
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.395162 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gz2gs"
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.410557 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pdt8p"
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.413545 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9qgbp"
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.413845 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.415282 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-64v8j"
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.416238 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.421121 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9ffcg"
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.431721 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444325 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b1d233-4df3-4960-abd3-c8bf11ca322b-catalog-content\") pod \"a6b1d233-4df3-4960-abd3-c8bf11ca322b\" (UID: \"a6b1d233-4df3-4960-abd3-c8bf11ca322b\") "
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444388 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-utilities\") pod \"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2\" (UID: \"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2\") "
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444424 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1f81499-3c8f-40b6-bd99-344558565c77-catalog-content\") pod \"e1f81499-3c8f-40b6-bd99-344558565c77\" (UID: \"e1f81499-3c8f-40b6-bd99-344558565c77\") "
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444450 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34532038-b143-4391-99f3-37275497f03e-utilities\") pod \"34532038-b143-4391-99f3-37275497f03e\" (UID: \"34532038-b143-4391-99f3-37275497f03e\") "
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444472 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqm7x\" (UniqueName: \"kubernetes.io/projected/74ac561c-5afe-4308-814f-11bf3f93f4ac-kube-api-access-nqm7x\") pod \"74ac561c-5afe-4308-814f-11bf3f93f4ac\" (UID: \"74ac561c-5afe-4308-814f-11bf3f93f4ac\") "
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444511 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74ac561c-5afe-4308-814f-11bf3f93f4ac-catalog-content\") pod \"74ac561c-5afe-4308-814f-11bf3f93f4ac\" (UID: \"74ac561c-5afe-4308-814f-11bf3f93f4ac\") "
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444533 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74ac561c-5afe-4308-814f-11bf3f93f4ac-utilities\") pod \"74ac561c-5afe-4308-814f-11bf3f93f4ac\" (UID: \"74ac561c-5afe-4308-814f-11bf3f93f4ac\") "
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444556 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcnhk\" (UniqueName: \"kubernetes.io/projected/3bb788e4-fad9-4416-9042-7a46d8ef83b3-kube-api-access-qcnhk\") pod \"3bb788e4-fad9-4416-9042-7a46d8ef83b3\" (UID: \"3bb788e4-fad9-4416-9042-7a46d8ef83b3\") "
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444574 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1f81499-3c8f-40b6-bd99-344558565c77-utilities\") pod \"e1f81499-3c8f-40b6-bd99-344558565c77\" (UID: \"e1f81499-3c8f-40b6-bd99-344558565c77\") "
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444605 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-utilities\") pod \"690b269f-3c5d-47b5-a11b-6c44dd6b1f95\" (UID: \"690b269f-3c5d-47b5-a11b-6c44dd6b1f95\") "
Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444633 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"kube-api-access-56g7f\" (UniqueName: \"kubernetes.io/projected/34532038-b143-4391-99f3-37275497f03e-kube-api-access-56g7f\") pod \"34532038-b143-4391-99f3-37275497f03e\" (UID: \"34532038-b143-4391-99f3-37275497f03e\") " Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444652 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bb788e4-fad9-4416-9042-7a46d8ef83b3-marketplace-trusted-ca\") pod \"3bb788e4-fad9-4416-9042-7a46d8ef83b3\" (UID: \"3bb788e4-fad9-4416-9042-7a46d8ef83b3\") " Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444672 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-catalog-content\") pod \"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2\" (UID: \"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2\") " Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444713 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf96d\" (UniqueName: \"kubernetes.io/projected/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-kube-api-access-mf96d\") pod \"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2\" (UID: \"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2\") " Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444731 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhhjt\" (UniqueName: \"kubernetes.io/projected/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-kube-api-access-vhhjt\") pod \"690b269f-3c5d-47b5-a11b-6c44dd6b1f95\" (UID: \"690b269f-3c5d-47b5-a11b-6c44dd6b1f95\") " Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444761 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3bb788e4-fad9-4416-9042-7a46d8ef83b3-marketplace-operator-metrics\") pod 
\"3bb788e4-fad9-4416-9042-7a46d8ef83b3\" (UID: \"3bb788e4-fad9-4416-9042-7a46d8ef83b3\") " Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444793 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m45d\" (UniqueName: \"kubernetes.io/projected/7a408baf-8e2f-438d-b77f-2abd317fe09f-kube-api-access-8m45d\") pod \"7a408baf-8e2f-438d-b77f-2abd317fe09f\" (UID: \"7a408baf-8e2f-438d-b77f-2abd317fe09f\") " Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444813 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a408baf-8e2f-438d-b77f-2abd317fe09f-catalog-content\") pod \"7a408baf-8e2f-438d-b77f-2abd317fe09f\" (UID: \"7a408baf-8e2f-438d-b77f-2abd317fe09f\") " Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444851 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-catalog-content\") pod \"690b269f-3c5d-47b5-a11b-6c44dd6b1f95\" (UID: \"690b269f-3c5d-47b5-a11b-6c44dd6b1f95\") " Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444867 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a408baf-8e2f-438d-b77f-2abd317fe09f-utilities\") pod \"7a408baf-8e2f-438d-b77f-2abd317fe09f\" (UID: \"7a408baf-8e2f-438d-b77f-2abd317fe09f\") " Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444889 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b1d233-4df3-4960-abd3-c8bf11ca322b-utilities\") pod \"a6b1d233-4df3-4960-abd3-c8bf11ca322b\" (UID: \"a6b1d233-4df3-4960-abd3-c8bf11ca322b\") " Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444905 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-hpldv\" (UniqueName: \"kubernetes.io/projected/a6b1d233-4df3-4960-abd3-c8bf11ca322b-kube-api-access-hpldv\") pod \"a6b1d233-4df3-4960-abd3-c8bf11ca322b\" (UID: \"a6b1d233-4df3-4960-abd3-c8bf11ca322b\") " Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444933 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbr5f\" (UniqueName: \"kubernetes.io/projected/e1f81499-3c8f-40b6-bd99-344558565c77-kube-api-access-vbr5f\") pod \"e1f81499-3c8f-40b6-bd99-344558565c77\" (UID: \"e1f81499-3c8f-40b6-bd99-344558565c77\") " Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.444953 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34532038-b143-4391-99f3-37275497f03e-catalog-content\") pod \"34532038-b143-4391-99f3-37275497f03e\" (UID: \"34532038-b143-4391-99f3-37275497f03e\") " Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.446146 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bb788e4-fad9-4416-9042-7a46d8ef83b3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3bb788e4-fad9-4416-9042-7a46d8ef83b3" (UID: "3bb788e4-fad9-4416-9042-7a46d8ef83b3"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.447371 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74ac561c-5afe-4308-814f-11bf3f93f4ac-utilities" (OuterVolumeSpecName: "utilities") pod "74ac561c-5afe-4308-814f-11bf3f93f4ac" (UID: "74ac561c-5afe-4308-814f-11bf3f93f4ac"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.447801 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6b1d233-4df3-4960-abd3-c8bf11ca322b-utilities" (OuterVolumeSpecName: "utilities") pod "a6b1d233-4df3-4960-abd3-c8bf11ca322b" (UID: "a6b1d233-4df3-4960-abd3-c8bf11ca322b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.449027 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a408baf-8e2f-438d-b77f-2abd317fe09f-utilities" (OuterVolumeSpecName: "utilities") pod "7a408baf-8e2f-438d-b77f-2abd317fe09f" (UID: "7a408baf-8e2f-438d-b77f-2abd317fe09f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.449235 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1f81499-3c8f-40b6-bd99-344558565c77-utilities" (OuterVolumeSpecName: "utilities") pod "e1f81499-3c8f-40b6-bd99-344558565c77" (UID: "e1f81499-3c8f-40b6-bd99-344558565c77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.449821 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34532038-b143-4391-99f3-37275497f03e-utilities" (OuterVolumeSpecName: "utilities") pod "34532038-b143-4391-99f3-37275497f03e" (UID: "34532038-b143-4391-99f3-37275497f03e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.450248 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-utilities" (OuterVolumeSpecName: "utilities") pod "690b269f-3c5d-47b5-a11b-6c44dd6b1f95" (UID: "690b269f-3c5d-47b5-a11b-6c44dd6b1f95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.451477 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-utilities" (OuterVolumeSpecName: "utilities") pod "9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" (UID: "9eace400-39bb-4f2a-ab2f-379a8fd3e8c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.454990 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f81499-3c8f-40b6-bd99-344558565c77-kube-api-access-vbr5f" (OuterVolumeSpecName: "kube-api-access-vbr5f") pod "e1f81499-3c8f-40b6-bd99-344558565c77" (UID: "e1f81499-3c8f-40b6-bd99-344558565c77"). InnerVolumeSpecName "kube-api-access-vbr5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.455150 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-kube-api-access-mf96d" (OuterVolumeSpecName: "kube-api-access-mf96d") pod "9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" (UID: "9eace400-39bb-4f2a-ab2f-379a8fd3e8c2"). InnerVolumeSpecName "kube-api-access-mf96d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.455169 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb788e4-fad9-4416-9042-7a46d8ef83b3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3bb788e4-fad9-4416-9042-7a46d8ef83b3" (UID: "3bb788e4-fad9-4416-9042-7a46d8ef83b3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.455937 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bb788e4-fad9-4416-9042-7a46d8ef83b3-kube-api-access-qcnhk" (OuterVolumeSpecName: "kube-api-access-qcnhk") pod "3bb788e4-fad9-4416-9042-7a46d8ef83b3" (UID: "3bb788e4-fad9-4416-9042-7a46d8ef83b3"). InnerVolumeSpecName "kube-api-access-qcnhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.458068 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a408baf-8e2f-438d-b77f-2abd317fe09f-kube-api-access-8m45d" (OuterVolumeSpecName: "kube-api-access-8m45d") pod "7a408baf-8e2f-438d-b77f-2abd317fe09f" (UID: "7a408baf-8e2f-438d-b77f-2abd317fe09f"). InnerVolumeSpecName "kube-api-access-8m45d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.459146 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-kube-api-access-vhhjt" (OuterVolumeSpecName: "kube-api-access-vhhjt") pod "690b269f-3c5d-47b5-a11b-6c44dd6b1f95" (UID: "690b269f-3c5d-47b5-a11b-6c44dd6b1f95"). InnerVolumeSpecName "kube-api-access-vhhjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.465772 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.466904 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74ac561c-5afe-4308-814f-11bf3f93f4ac-kube-api-access-nqm7x" (OuterVolumeSpecName: "kube-api-access-nqm7x") pod "74ac561c-5afe-4308-814f-11bf3f93f4ac" (UID: "74ac561c-5afe-4308-814f-11bf3f93f4ac"). InnerVolumeSpecName "kube-api-access-nqm7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.473475 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34532038-b143-4391-99f3-37275497f03e-kube-api-access-56g7f" (OuterVolumeSpecName: "kube-api-access-56g7f") pod "34532038-b143-4391-99f3-37275497f03e" (UID: "34532038-b143-4391-99f3-37275497f03e"). InnerVolumeSpecName "kube-api-access-56g7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.473733 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b1d233-4df3-4960-abd3-c8bf11ca322b-kube-api-access-hpldv" (OuterVolumeSpecName: "kube-api-access-hpldv") pod "a6b1d233-4df3-4960-abd3-c8bf11ca322b" (UID: "a6b1d233-4df3-4960-abd3-c8bf11ca322b"). InnerVolumeSpecName "kube-api-access-hpldv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.484180 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74ac561c-5afe-4308-814f-11bf3f93f4ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74ac561c-5afe-4308-814f-11bf3f93f4ac" (UID: "74ac561c-5afe-4308-814f-11bf3f93f4ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.490409 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6b1d233-4df3-4960-abd3-c8bf11ca322b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6b1d233-4df3-4960-abd3-c8bf11ca322b" (UID: "a6b1d233-4df3-4960-abd3-c8bf11ca322b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.546144 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74ac561c-5afe-4308-814f-11bf3f93f4ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.546416 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcnhk\" (UniqueName: \"kubernetes.io/projected/3bb788e4-fad9-4416-9042-7a46d8ef83b3-kube-api-access-qcnhk\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.546543 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1f81499-3c8f-40b6-bd99-344558565c77-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.546623 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-utilities\") on node \"crc\" DevicePath 
\"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.546721 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56g7f\" (UniqueName: \"kubernetes.io/projected/34532038-b143-4391-99f3-37275497f03e-kube-api-access-56g7f\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.546799 4705 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bb788e4-fad9-4416-9042-7a46d8ef83b3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.546988 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf96d\" (UniqueName: \"kubernetes.io/projected/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-kube-api-access-mf96d\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.547109 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhhjt\" (UniqueName: \"kubernetes.io/projected/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-kube-api-access-vhhjt\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.547190 4705 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3bb788e4-fad9-4416-9042-7a46d8ef83b3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.547272 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m45d\" (UniqueName: \"kubernetes.io/projected/7a408baf-8e2f-438d-b77f-2abd317fe09f-kube-api-access-8m45d\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.547351 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a408baf-8e2f-438d-b77f-2abd317fe09f-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc 
kubenswrapper[4705]: I0124 07:46:08.547425 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b1d233-4df3-4960-abd3-c8bf11ca322b-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.547505 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpldv\" (UniqueName: \"kubernetes.io/projected/a6b1d233-4df3-4960-abd3-c8bf11ca322b-kube-api-access-hpldv\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.547672 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbr5f\" (UniqueName: \"kubernetes.io/projected/e1f81499-3c8f-40b6-bd99-344558565c77-kube-api-access-vbr5f\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.547750 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b1d233-4df3-4960-abd3-c8bf11ca322b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.547834 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.547930 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34532038-b143-4391-99f3-37275497f03e-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.548005 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqm7x\" (UniqueName: \"kubernetes.io/projected/74ac561c-5afe-4308-814f-11bf3f93f4ac-kube-api-access-nqm7x\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.548109 4705 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74ac561c-5afe-4308-814f-11bf3f93f4ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.550843 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1f81499-3c8f-40b6-bd99-344558565c77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1f81499-3c8f-40b6-bd99-344558565c77" (UID: "e1f81499-3c8f-40b6-bd99-344558565c77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.551659 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a408baf-8e2f-438d-b77f-2abd317fe09f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a408baf-8e2f-438d-b77f-2abd317fe09f" (UID: "7a408baf-8e2f-438d-b77f-2abd317fe09f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.552578 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34532038-b143-4391-99f3-37275497f03e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34532038-b143-4391-99f3-37275497f03e" (UID: "34532038-b143-4391-99f3-37275497f03e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.553295 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "690b269f-3c5d-47b5-a11b-6c44dd6b1f95" (UID: "690b269f-3c5d-47b5-a11b-6c44dd6b1f95"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.612019 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" (UID: "9eace400-39bb-4f2a-ab2f-379a8fd3e8c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.615266 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.649369 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a408baf-8e2f-438d-b77f-2abd317fe09f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.649414 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690b269f-3c5d-47b5-a11b-6c44dd6b1f95-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.649428 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34532038-b143-4391-99f3-37275497f03e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.649439 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1f81499-3c8f-40b6-bd99-344558565c77-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.649450 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2-catalog-content\") on node 
\"crc\" DevicePath \"\"" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.773345 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 24 07:46:08 crc kubenswrapper[4705]: I0124 07:46:08.900668 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.013605 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.013643 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h8gjk" event={"ID":"3bb788e4-fad9-4416-9042-7a46d8ef83b3","Type":"ContainerDied","Data":"857a0f42e0fc856d125e2ddb15532ac88b578dddeed392768d182f0c7ef69a99"} Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.013884 4705 scope.go:117] "RemoveContainer" containerID="0ebc6faa3a0b3d7dbd41ae1ccf8aadbbecde871135e49869f7d23000319f1c5b" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.018962 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k26f" event={"ID":"690b269f-3c5d-47b5-a11b-6c44dd6b1f95","Type":"ContainerDied","Data":"2f8e6131d0792efed25a6b69d6b19f9f0fde735fdceaac46ec04392a9c66caf8"} Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.019094 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4k26f" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.022821 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gz2gs" event={"ID":"7a408baf-8e2f-438d-b77f-2abd317fe09f","Type":"ContainerDied","Data":"166512c0f8e629d187dc072e15f26656604ca7c5e311fcc49195ab97bf4b0354"} Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.022963 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gz2gs" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.026495 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ffcg" event={"ID":"34532038-b143-4391-99f3-37275497f03e","Type":"ContainerDied","Data":"6ef71f1bc6cab50a9570168871c5866c2a6861cb83381cdd030118fdd14583bc"} Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.026677 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9ffcg" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.033056 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pdt8p" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.033062 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pdt8p" event={"ID":"a6b1d233-4df3-4960-abd3-c8bf11ca322b","Type":"ContainerDied","Data":"829302d55b953fff4568d337610ad7902bf612cef54d96cd52f9e9c8cb7c4bf7"} Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.035586 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmflx" event={"ID":"74ac561c-5afe-4308-814f-11bf3f93f4ac","Type":"ContainerDied","Data":"b79894bc8bace6e7dede67395fe723e88885a925c7e8756dd667b76b48b077d2"} Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.035872 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmflx" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.042155 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64v8j" event={"ID":"9eace400-39bb-4f2a-ab2f-379a8fd3e8c2","Type":"ContainerDied","Data":"cd3d462defd5da412a88d906c17334d8aa32a0981cbdb0f8ab8920505cb17ccb"} Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.042188 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-64v8j" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.042589 4705 scope.go:117] "RemoveContainer" containerID="e9b60610faf8cb0dfa91bcfc6bc6e810a29808048c71a06dab178b904242cbe3" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.048545 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qgbp" event={"ID":"e1f81499-3c8f-40b6-bd99-344558565c77","Type":"ContainerDied","Data":"cf0fe8a76ecf068e111eea51590bb8c21095ffcb455c4482ca530a448d1fff78"} Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.048563 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9qgbp" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.062276 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.077698 4705 scope.go:117] "RemoveContainer" containerID="53565f93ddf7b9091a305d0076a53d5c7664b6ddac1b344e16a8bf70ae4b0067" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.107293 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h8gjk"] Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.108846 4705 scope.go:117] "RemoveContainer" containerID="5605285eb04c617456491792796c25163cd88b1d7558e4f4beeeb5479caa3b2c" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.120128 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h8gjk"] Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.125611 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gz2gs"] Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.128250 4705 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"config" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.131679 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gz2gs"] Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.135853 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4k26f"] Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.135941 4705 scope.go:117] "RemoveContainer" containerID="a38c55b284891553899c2bde7fc073ad16914ece5c5b93b3d37dd2691203c034" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.144462 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4k26f"] Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.163189 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-64v8j"] Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.163361 4705 scope.go:117] "RemoveContainer" containerID="8342a78d571d810594b9915ee0121d763ed997f754b4c9f35192c753144333f2" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.167258 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-64v8j"] Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.172437 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9ffcg"] Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.178367 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9ffcg"] Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.183052 4705 scope.go:117] "RemoveContainer" containerID="fa2e50dd5dd9061c98e738a23e27571fb8e4199780677a9a3bb7f7cd36e0f88d" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.201295 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9qgbp"] Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 
07:46:09.205141 4705 scope.go:117] "RemoveContainer" containerID="7a68ddec5163e25591026a873d624b11f5a6f681a0ae5493d6e947de65b6f880" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.210007 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9qgbp"] Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.215588 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmflx"] Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.219147 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmflx"] Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.220933 4705 scope.go:117] "RemoveContainer" containerID="22b65463a8b3ea5d3cd5c8b09f1e8a69299d33fc391a25c9985b80354106f4e7" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.222002 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pdt8p"] Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.225472 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pdt8p"] Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.237659 4705 scope.go:117] "RemoveContainer" containerID="cfdf48fc7acde935ee114fcdb4b233be79879019e64d6759e7288484e4bfea8c" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.260599 4705 scope.go:117] "RemoveContainer" containerID="59a7b44cc571d89ebc9d625342a7e5eae37f2b14cc56e7a1f711e17fd711ae57" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.273680 4705 scope.go:117] "RemoveContainer" containerID="ccf2f28c8aa48fc157bef752661babfd42c2e633f56eb1b84869539aedcae6bd" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.286632 4705 scope.go:117] "RemoveContainer" containerID="5d1b64cfb3a4e012c3e0002ddf8ab321364abb7b1c649b570310b2cfe0c1b05d" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.297723 4705 scope.go:117] "RemoveContainer" 
containerID="fd8c8215603005aa0d8b622be0eced4fc7f63098b55fdd29c8fb068314521345" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.309826 4705 scope.go:117] "RemoveContainer" containerID="be1366f49dc6002add07bc8c11746f580d80b16c29f088cfe99727fdbef7cf7d" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.323967 4705 scope.go:117] "RemoveContainer" containerID="20044c78fdb33d68e095188e7d16475c95b9503974eeadfc3afd8f4954bd5162" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.338731 4705 scope.go:117] "RemoveContainer" containerID="8b3c126d86952e3d8018217e9b59406f4d562e5be64d27143c129828fd470763" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.350325 4705 scope.go:117] "RemoveContainer" containerID="9cde2e9b124ad99ad229b7688c3a6a715a611d28e69d543e397d33517e463366" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.365926 4705 scope.go:117] "RemoveContainer" containerID="e79e9606b191160959b6120dc23f05dc616ca99b33403d90e9f9f5ea388c8afe" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.379685 4705 scope.go:117] "RemoveContainer" containerID="dfaa7523274614e9b6ae7139d225a56acc53ca076b355aac7985664fe9cc9459" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.391773 4705 scope.go:117] "RemoveContainer" containerID="f78bb201e5fe689d30326a139eb8ebf3e8d1e3f28fdc67752f9aba84fcdb8713" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.424121 4705 scope.go:117] "RemoveContainer" containerID="76a3a1df75a4129552f10a18ef9d7ec5c1c1cdb8c394944962c348f59e0c40a9" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.494280 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.507574 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 
07:46:09.600506 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34532038-b143-4391-99f3-37275497f03e" path="/var/lib/kubelet/pods/34532038-b143-4391-99f3-37275497f03e/volumes" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.601341 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bb788e4-fad9-4416-9042-7a46d8ef83b3" path="/var/lib/kubelet/pods/3bb788e4-fad9-4416-9042-7a46d8ef83b3/volumes" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.601928 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" path="/var/lib/kubelet/pods/690b269f-3c5d-47b5-a11b-6c44dd6b1f95/volumes" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.603118 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" path="/var/lib/kubelet/pods/74ac561c-5afe-4308-814f-11bf3f93f4ac/volumes" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.603814 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" path="/var/lib/kubelet/pods/7a408baf-8e2f-438d-b77f-2abd317fe09f/volumes" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.604979 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" path="/var/lib/kubelet/pods/9eace400-39bb-4f2a-ab2f-379a8fd3e8c2/volumes" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.605649 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" path="/var/lib/kubelet/pods/a6b1d233-4df3-4960-abd3-c8bf11ca322b/volumes" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.606322 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f81499-3c8f-40b6-bd99-344558565c77" path="/var/lib/kubelet/pods/e1f81499-3c8f-40b6-bd99-344558565c77/volumes" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 
07:46:09.668286 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.673701 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.794600 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 24 07:46:09 crc kubenswrapper[4705]: I0124 07:46:09.908578 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 24 07:46:10 crc kubenswrapper[4705]: I0124 07:46:10.109871 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 24 07:46:10 crc kubenswrapper[4705]: I0124 07:46:10.129157 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 24 07:46:10 crc kubenswrapper[4705]: I0124 07:46:10.140944 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 24 07:46:10 crc kubenswrapper[4705]: I0124 07:46:10.380281 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 24 07:46:10 crc kubenswrapper[4705]: I0124 07:46:10.577691 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 24 07:46:10 crc kubenswrapper[4705]: I0124 07:46:10.730858 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 24 07:46:10 crc kubenswrapper[4705]: I0124 07:46:10.795491 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" 
Jan 24 07:46:10 crc kubenswrapper[4705]: I0124 07:46:10.845814 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 24 07:46:11 crc kubenswrapper[4705]: I0124 07:46:11.389952 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 24 07:46:11 crc kubenswrapper[4705]: I0124 07:46:11.426798 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 24 07:46:11 crc kubenswrapper[4705]: I0124 07:46:11.816394 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 24 07:46:11 crc kubenswrapper[4705]: I0124 07:46:11.896866 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 24 07:46:12 crc kubenswrapper[4705]: I0124 07:46:12.106160 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 24 07:46:15 crc kubenswrapper[4705]: I0124 07:46:15.992057 4705 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 07:46:15 crc kubenswrapper[4705]: I0124 07:46:15.992521 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://4c88d21d1c897fea08dd3e80512a4243d36c3b7a171cd5a9676333ddd481de42" gracePeriod=5 Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.122189 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 
07:46:21.122898 4705 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="4c88d21d1c897fea08dd3e80512a4243d36c3b7a171cd5a9676333ddd481de42" exitCode=137 Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.594316 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.594398 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.769363 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.769455 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.769520 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.769537 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.769585 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.769815 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.770072 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.770153 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.770205 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.778144 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.871238 4705 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.871290 4705 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.871302 4705 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.871309 4705 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:21 crc kubenswrapper[4705]: I0124 07:46:21.871319 4705 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:22 crc kubenswrapper[4705]: I0124 07:46:22.130323 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 24 07:46:22 crc kubenswrapper[4705]: I0124 07:46:22.130414 4705 scope.go:117] "RemoveContainer" containerID="4c88d21d1c897fea08dd3e80512a4243d36c3b7a171cd5a9676333ddd481de42" Jan 24 07:46:22 crc kubenswrapper[4705]: I0124 07:46:22.130490 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 07:46:23 crc kubenswrapper[4705]: I0124 07:46:23.582731 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 24 07:46:36 crc kubenswrapper[4705]: I0124 07:46:36.208903 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 24 07:46:36 crc kubenswrapper[4705]: I0124 07:46:36.211079 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 24 07:46:36 crc kubenswrapper[4705]: I0124 07:46:36.211136 4705 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a737faf487d8621091285cbe967b5fadaf1b34eb2ad8bf743122d9dc5856b87d" exitCode=137 Jan 24 07:46:36 crc kubenswrapper[4705]: I0124 07:46:36.211173 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a737faf487d8621091285cbe967b5fadaf1b34eb2ad8bf743122d9dc5856b87d"} Jan 24 07:46:36 crc kubenswrapper[4705]: I0124 07:46:36.211215 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"578c37896b3614ed4037d2c7c451775970f2f14928ed6c79b4b9a8bb748dbba0"} Jan 24 07:46:36 crc kubenswrapper[4705]: I0124 07:46:36.211237 4705 scope.go:117] "RemoveContainer" containerID="006f427fd26c000cfcd729fe62ddb62d3710678bf889e44c49d727a0f6ab1166" Jan 24 07:46:37 crc kubenswrapper[4705]: I0124 07:46:37.217862 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 24 07:46:45 crc kubenswrapper[4705]: I0124 07:46:45.296142 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:46:45 crc kubenswrapper[4705]: I0124 07:46:45.300863 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:46:45 crc kubenswrapper[4705]: I0124 07:46:45.331309 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:46:46 crc kubenswrapper[4705]: I0124 07:46:46.268331 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.679110 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b76f947cf-swbj9"] Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.680711 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" podUID="a4148672-7a7b-43c5-a453-9ea6904bab91" containerName="controller-manager" 
containerID="cri-o://1b7e21c637ee8b9d112fb0e09e5c8b6596d25bbcfb98392de4abf977ee18e2d0" gracePeriod=30 Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687118 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-httl9"] Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687417 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" containerName="extract-content" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687439 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" containerName="extract-content" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687450 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f81499-3c8f-40b6-bd99-344558565c77" containerName="extract-utilities" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687460 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f81499-3c8f-40b6-bd99-344558565c77" containerName="extract-utilities" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687470 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687478 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687488 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34532038-b143-4391-99f3-37275497f03e" containerName="extract-content" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687495 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="34532038-b143-4391-99f3-37275497f03e" containerName="extract-content" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687528 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687536 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687549 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" containerName="extract-content" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687557 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" containerName="extract-content" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687567 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34532038-b143-4391-99f3-37275497f03e" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687575 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="34532038-b143-4391-99f3-37275497f03e" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687589 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" containerName="installer" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687598 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" containerName="installer" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687608 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" containerName="extract-utilities" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687618 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" containerName="extract-utilities" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687626 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687635 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687646 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" containerName="extract-utilities" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687654 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" containerName="extract-utilities" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687665 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f81499-3c8f-40b6-bd99-344558565c77" containerName="extract-content" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687672 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f81499-3c8f-40b6-bd99-344558565c77" containerName="extract-content" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687681 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" containerName="extract-content" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687690 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" containerName="extract-content" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687700 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687709 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687719 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" containerName="extract-content" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687726 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" containerName="extract-content" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687735 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34532038-b143-4391-99f3-37275497f03e" containerName="extract-utilities" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687743 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="34532038-b143-4391-99f3-37275497f03e" containerName="extract-utilities" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687752 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f81499-3c8f-40b6-bd99-344558565c77" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687760 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f81499-3c8f-40b6-bd99-344558565c77" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687772 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" containerName="extract-content" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687780 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" containerName="extract-content" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687790 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb788e4-fad9-4416-9042-7a46d8ef83b3" containerName="marketplace-operator" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687797 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb788e4-fad9-4416-9042-7a46d8ef83b3" containerName="marketplace-operator" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687807 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687833 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687842 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" containerName="extract-utilities" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687849 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" containerName="extract-utilities" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687858 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" containerName="extract-utilities" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687867 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" containerName="extract-utilities" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687879 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687887 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: E0124 07:46:54.687895 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" containerName="extract-utilities" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.687902 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" containerName="extract-utilities" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.688028 4705 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9eace400-39bb-4f2a-ab2f-379a8fd3e8c2" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.688043 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6b1d233-4df3-4960-abd3-c8bf11ca322b" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.688055 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="690b269f-3c5d-47b5-a11b-6c44dd6b1f95" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.688066 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="34532038-b143-4391-99f3-37275497f03e" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.688076 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.688084 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bb788e4-fad9-4416-9042-7a46d8ef83b3" containerName="marketplace-operator" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.688094 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="74ac561c-5afe-4308-814f-11bf3f93f4ac" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.688106 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f81499-3c8f-40b6-bd99-344558565c77" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.688114 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="63fd1206-c28b-4e05-94ab-8935afb05436" containerName="installer" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.688124 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a408baf-8e2f-438d-b77f-2abd317fe09f" containerName="registry-server" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.690628 4705 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-httl9" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.696944 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.697096 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.697210 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.698255 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.706633 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-httl9"] Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.713274 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rnmwt"] Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.716104 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.775969 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr28g\" (UniqueName: \"kubernetes.io/projected/568a6099-4783-45d9-9ea8-7c856a3ddd86-kube-api-access-wr28g\") pod \"marketplace-operator-79b997595-httl9\" (UID: \"568a6099-4783-45d9-9ea8-7c856a3ddd86\") " pod="openshift-marketplace/marketplace-operator-79b997595-httl9" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.776059 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/568a6099-4783-45d9-9ea8-7c856a3ddd86-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-httl9\" (UID: \"568a6099-4783-45d9-9ea8-7c856a3ddd86\") " pod="openshift-marketplace/marketplace-operator-79b997595-httl9" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.776101 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/568a6099-4783-45d9-9ea8-7c856a3ddd86-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-httl9\" (UID: \"568a6099-4783-45d9-9ea8-7c856a3ddd86\") " pod="openshift-marketplace/marketplace-operator-79b997595-httl9" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.810920 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k"] Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.811142 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" podUID="fab3a70b-8e34-4cd8-8b34-c8c17ea9d429" containerName="route-controller-manager" containerID="cri-o://5aa474d8375f6553144fd88b4baa56a55f5152b93e2a91183f25b77398c9b905" gracePeriod=30 Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.877467 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr28g\" (UniqueName: \"kubernetes.io/projected/568a6099-4783-45d9-9ea8-7c856a3ddd86-kube-api-access-wr28g\") pod \"marketplace-operator-79b997595-httl9\" (UID: \"568a6099-4783-45d9-9ea8-7c856a3ddd86\") " pod="openshift-marketplace/marketplace-operator-79b997595-httl9" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.877520 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/568a6099-4783-45d9-9ea8-7c856a3ddd86-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-httl9\" (UID: \"568a6099-4783-45d9-9ea8-7c856a3ddd86\") " pod="openshift-marketplace/marketplace-operator-79b997595-httl9" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.877550 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/568a6099-4783-45d9-9ea8-7c856a3ddd86-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-httl9\" (UID: \"568a6099-4783-45d9-9ea8-7c856a3ddd86\") " pod="openshift-marketplace/marketplace-operator-79b997595-httl9" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.879490 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/568a6099-4783-45d9-9ea8-7c856a3ddd86-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-httl9\" (UID: \"568a6099-4783-45d9-9ea8-7c856a3ddd86\") " pod="openshift-marketplace/marketplace-operator-79b997595-httl9" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.898031 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/568a6099-4783-45d9-9ea8-7c856a3ddd86-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-httl9\" (UID: \"568a6099-4783-45d9-9ea8-7c856a3ddd86\") " pod="openshift-marketplace/marketplace-operator-79b997595-httl9" Jan 24 07:46:54 crc kubenswrapper[4705]: I0124 07:46:54.954557 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr28g\" (UniqueName: \"kubernetes.io/projected/568a6099-4783-45d9-9ea8-7c856a3ddd86-kube-api-access-wr28g\") pod \"marketplace-operator-79b997595-httl9\" (UID: \"568a6099-4783-45d9-9ea8-7c856a3ddd86\") " pod="openshift-marketplace/marketplace-operator-79b997595-httl9" Jan 24 07:46:55 crc 
kubenswrapper[4705]: I0124 07:46:55.015325 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-httl9" Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.358762 4705 generic.go:334] "Generic (PLEG): container finished" podID="fab3a70b-8e34-4cd8-8b34-c8c17ea9d429" containerID="5aa474d8375f6553144fd88b4baa56a55f5152b93e2a91183f25b77398c9b905" exitCode=0 Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.358887 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" event={"ID":"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429","Type":"ContainerDied","Data":"5aa474d8375f6553144fd88b4baa56a55f5152b93e2a91183f25b77398c9b905"} Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.360668 4705 generic.go:334] "Generic (PLEG): container finished" podID="a4148672-7a7b-43c5-a453-9ea6904bab91" containerID="1b7e21c637ee8b9d112fb0e09e5c8b6596d25bbcfb98392de4abf977ee18e2d0" exitCode=0 Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.360706 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" event={"ID":"a4148672-7a7b-43c5-a453-9ea6904bab91","Type":"ContainerDied","Data":"1b7e21c637ee8b9d112fb0e09e5c8b6596d25bbcfb98392de4abf977ee18e2d0"} Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.555741 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-httl9"] Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.850290 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.939973 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.965350 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4"] Jan 24 07:46:55 crc kubenswrapper[4705]: E0124 07:46:55.965594 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab3a70b-8e34-4cd8-8b34-c8c17ea9d429" containerName="route-controller-manager" Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.965617 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab3a70b-8e34-4cd8-8b34-c8c17ea9d429" containerName="route-controller-manager" Jan 24 07:46:55 crc kubenswrapper[4705]: E0124 07:46:55.965636 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4148672-7a7b-43c5-a453-9ea6904bab91" containerName="controller-manager" Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.965645 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4148672-7a7b-43c5-a453-9ea6904bab91" containerName="controller-manager" Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.965758 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="fab3a70b-8e34-4cd8-8b34-c8c17ea9d429" containerName="route-controller-manager" Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.965774 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4148672-7a7b-43c5-a453-9ea6904bab91" containerName="controller-manager" Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.966214 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.984580 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4"] Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.989257 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-serving-cert\") pod \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\" (UID: \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\") " Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.989341 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-client-ca\") pod \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\" (UID: \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\") " Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.989423 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-config\") pod \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\" (UID: \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\") " Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.989455 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqc5x\" (UniqueName: \"kubernetes.io/projected/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-kube-api-access-kqc5x\") pod \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\" (UID: \"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429\") " Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.990088 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-client-ca" (OuterVolumeSpecName: "client-ca") pod "fab3a70b-8e34-4cd8-8b34-c8c17ea9d429" 
(UID: "fab3a70b-8e34-4cd8-8b34-c8c17ea9d429"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.990279 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-config" (OuterVolumeSpecName: "config") pod "fab3a70b-8e34-4cd8-8b34-c8c17ea9d429" (UID: "fab3a70b-8e34-4cd8-8b34-c8c17ea9d429"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.995307 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-kube-api-access-kqc5x" (OuterVolumeSpecName: "kube-api-access-kqc5x") pod "fab3a70b-8e34-4cd8-8b34-c8c17ea9d429" (UID: "fab3a70b-8e34-4cd8-8b34-c8c17ea9d429"). InnerVolumeSpecName "kube-api-access-kqc5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:46:55 crc kubenswrapper[4705]: I0124 07:46:55.996091 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fab3a70b-8e34-4cd8-8b34-c8c17ea9d429" (UID: "fab3a70b-8e34-4cd8-8b34-c8c17ea9d429"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.090753 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-client-ca\") pod \"a4148672-7a7b-43c5-a453-9ea6904bab91\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.090833 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4148672-7a7b-43c5-a453-9ea6904bab91-serving-cert\") pod \"a4148672-7a7b-43c5-a453-9ea6904bab91\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.091483 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-proxy-ca-bundles\") pod \"a4148672-7a7b-43c5-a453-9ea6904bab91\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.091555 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvlnz\" (UniqueName: \"kubernetes.io/projected/a4148672-7a7b-43c5-a453-9ea6904bab91-kube-api-access-fvlnz\") pod \"a4148672-7a7b-43c5-a453-9ea6904bab91\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.091603 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-config\") pod \"a4148672-7a7b-43c5-a453-9ea6904bab91\" (UID: \"a4148672-7a7b-43c5-a453-9ea6904bab91\") " Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.091804 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5a84d617-0a85-4195-b05f-46ee52c1e295-config\") pod \"route-controller-manager-dbb5d498c-lvnl4\" (UID: \"5a84d617-0a85-4195-b05f-46ee52c1e295\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.091845 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a4148672-7a7b-43c5-a453-9ea6904bab91" (UID: "a4148672-7a7b-43c5-a453-9ea6904bab91"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.091843 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-client-ca" (OuterVolumeSpecName: "client-ca") pod "a4148672-7a7b-43c5-a453-9ea6904bab91" (UID: "a4148672-7a7b-43c5-a453-9ea6904bab91"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.092026 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a84d617-0a85-4195-b05f-46ee52c1e295-serving-cert\") pod \"route-controller-manager-dbb5d498c-lvnl4\" (UID: \"5a84d617-0a85-4195-b05f-46ee52c1e295\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.092068 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a84d617-0a85-4195-b05f-46ee52c1e295-client-ca\") pod \"route-controller-manager-dbb5d498c-lvnl4\" (UID: \"5a84d617-0a85-4195-b05f-46ee52c1e295\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.092106 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sx6q\" (UniqueName: \"kubernetes.io/projected/5a84d617-0a85-4195-b05f-46ee52c1e295-kube-api-access-4sx6q\") pod \"route-controller-manager-dbb5d498c-lvnl4\" (UID: \"5a84d617-0a85-4195-b05f-46ee52c1e295\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.092246 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.092265 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 
07:46:56.092279 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqc5x\" (UniqueName: \"kubernetes.io/projected/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-kube-api-access-kqc5x\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.092321 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.092350 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-config" (OuterVolumeSpecName: "config") pod "a4148672-7a7b-43c5-a453-9ea6904bab91" (UID: "a4148672-7a7b-43c5-a453-9ea6904bab91"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.092362 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.092416 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.093646 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4148672-7a7b-43c5-a453-9ea6904bab91-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a4148672-7a7b-43c5-a453-9ea6904bab91" (UID: "a4148672-7a7b-43c5-a453-9ea6904bab91"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.093952 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4148672-7a7b-43c5-a453-9ea6904bab91-kube-api-access-fvlnz" (OuterVolumeSpecName: "kube-api-access-fvlnz") pod "a4148672-7a7b-43c5-a453-9ea6904bab91" (UID: "a4148672-7a7b-43c5-a453-9ea6904bab91"). InnerVolumeSpecName "kube-api-access-fvlnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.193880 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a84d617-0a85-4195-b05f-46ee52c1e295-config\") pod \"route-controller-manager-dbb5d498c-lvnl4\" (UID: \"5a84d617-0a85-4195-b05f-46ee52c1e295\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.193964 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a84d617-0a85-4195-b05f-46ee52c1e295-serving-cert\") pod \"route-controller-manager-dbb5d498c-lvnl4\" (UID: \"5a84d617-0a85-4195-b05f-46ee52c1e295\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.194004 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a84d617-0a85-4195-b05f-46ee52c1e295-client-ca\") pod \"route-controller-manager-dbb5d498c-lvnl4\" (UID: \"5a84d617-0a85-4195-b05f-46ee52c1e295\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.194031 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sx6q\" (UniqueName: 
\"kubernetes.io/projected/5a84d617-0a85-4195-b05f-46ee52c1e295-kube-api-access-4sx6q\") pod \"route-controller-manager-dbb5d498c-lvnl4\" (UID: \"5a84d617-0a85-4195-b05f-46ee52c1e295\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.194099 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4148672-7a7b-43c5-a453-9ea6904bab91-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.194111 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4148672-7a7b-43c5-a453-9ea6904bab91-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.194120 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvlnz\" (UniqueName: \"kubernetes.io/projected/a4148672-7a7b-43c5-a453-9ea6904bab91-kube-api-access-fvlnz\") on node \"crc\" DevicePath \"\"" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.195036 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a84d617-0a85-4195-b05f-46ee52c1e295-client-ca\") pod \"route-controller-manager-dbb5d498c-lvnl4\" (UID: \"5a84d617-0a85-4195-b05f-46ee52c1e295\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.195131 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a84d617-0a85-4195-b05f-46ee52c1e295-config\") pod \"route-controller-manager-dbb5d498c-lvnl4\" (UID: \"5a84d617-0a85-4195-b05f-46ee52c1e295\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.199222 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a84d617-0a85-4195-b05f-46ee52c1e295-serving-cert\") pod \"route-controller-manager-dbb5d498c-lvnl4\" (UID: \"5a84d617-0a85-4195-b05f-46ee52c1e295\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.209466 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sx6q\" (UniqueName: \"kubernetes.io/projected/5a84d617-0a85-4195-b05f-46ee52c1e295-kube-api-access-4sx6q\") pod \"route-controller-manager-dbb5d498c-lvnl4\" (UID: \"5a84d617-0a85-4195-b05f-46ee52c1e295\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.284709 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.370400 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" event={"ID":"fab3a70b-8e34-4cd8-8b34-c8c17ea9d429","Type":"ContainerDied","Data":"3b23f6147d33ec22b865ebb04d14183d42872b03cc8e87a7d8cc59bea5c0fdd8"} Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.370763 4705 scope.go:117] "RemoveContainer" containerID="5aa474d8375f6553144fd88b4baa56a55f5152b93e2a91183f25b77398c9b905" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.370434 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.372174 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" event={"ID":"a4148672-7a7b-43c5-a453-9ea6904bab91","Type":"ContainerDied","Data":"fe648f97db6ad869d3d1d0cf56a3c08937bca90546654ca21668f60a57406bbe"} Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.372221 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b76f947cf-swbj9" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.375630 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-httl9" event={"ID":"568a6099-4783-45d9-9ea8-7c856a3ddd86","Type":"ContainerStarted","Data":"86aed870437d2a0734cd009a2af4d31581ee4440cb10439e58fddc430cfd61a8"} Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.375672 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-httl9" event={"ID":"568a6099-4783-45d9-9ea8-7c856a3ddd86","Type":"ContainerStarted","Data":"63150b3af8386f91a53d563b984e9cfdd90e314f29a44c00a19e61e9e00a32ae"} Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.379316 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-httl9" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.379496 4705 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-httl9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" start-of-body= Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.379533 4705 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-httl9" podUID="568a6099-4783-45d9-9ea8-7c856a3ddd86" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.386763 4705 scope.go:117] "RemoveContainer" containerID="1b7e21c637ee8b9d112fb0e09e5c8b6596d25bbcfb98392de4abf977ee18e2d0" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.401068 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-httl9" podStartSLOduration=2.401044187 podStartE2EDuration="2.401044187s" podCreationTimestamp="2026-01-24 07:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:46:56.397893642 +0000 UTC m=+355.117766940" watchObservedRunningTime="2026-01-24 07:46:56.401044187 +0000 UTC m=+355.120917485" Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.415921 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b76f947cf-swbj9"] Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.420166 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b76f947cf-swbj9"] Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.440573 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k"] Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.446774 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-744bc7f9db-8nz6k"] Jan 24 07:46:56 crc kubenswrapper[4705]: I0124 07:46:56.499437 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4"] Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.382598 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" event={"ID":"5a84d617-0a85-4195-b05f-46ee52c1e295","Type":"ContainerStarted","Data":"fcf5e321ac3569cb84f09abb95440dc235c3ea1f34544d7b41c5c5e9155ff65c"} Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.384017 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.384112 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" event={"ID":"5a84d617-0a85-4195-b05f-46ee52c1e295","Type":"ContainerStarted","Data":"f8075a3016c002ae5ef6757fe4d0239e3708b2d62bb0400fca90b4cb9ceab383"} Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.387919 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-httl9" Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.389810 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.415148 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" podStartSLOduration=3.415133181 podStartE2EDuration="3.415133181s" podCreationTimestamp="2026-01-24 07:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:46:57.412562751 +0000 UTC m=+356.132436039" watchObservedRunningTime="2026-01-24 
07:46:57.415133181 +0000 UTC m=+356.135006469" Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.583087 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4148672-7a7b-43c5-a453-9ea6904bab91" path="/var/lib/kubelet/pods/a4148672-7a7b-43c5-a453-9ea6904bab91/volumes" Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.583784 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fab3a70b-8e34-4cd8-8b34-c8c17ea9d429" path="/var/lib/kubelet/pods/fab3a70b-8e34-4cd8-8b34-c8c17ea9d429/volumes" Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.964170 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-687cb74d44-dbrtk"] Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.964871 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.967189 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.967423 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.967931 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.968080 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.968105 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.969133 4705 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-controller-manager"/"client-ca" Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.974036 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 24 07:46:57 crc kubenswrapper[4705]: I0124 07:46:57.980580 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-687cb74d44-dbrtk"] Jan 24 07:46:58 crc kubenswrapper[4705]: I0124 07:46:58.131414 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpvsn\" (UniqueName: \"kubernetes.io/projected/bd9429e6-8e55-4665-9624-1f1c77113fd6-kube-api-access-fpvsn\") pod \"controller-manager-687cb74d44-dbrtk\" (UID: \"bd9429e6-8e55-4665-9624-1f1c77113fd6\") " pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:58 crc kubenswrapper[4705]: I0124 07:46:58.131693 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9429e6-8e55-4665-9624-1f1c77113fd6-config\") pod \"controller-manager-687cb74d44-dbrtk\" (UID: \"bd9429e6-8e55-4665-9624-1f1c77113fd6\") " pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:58 crc kubenswrapper[4705]: I0124 07:46:58.131761 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd9429e6-8e55-4665-9624-1f1c77113fd6-serving-cert\") pod \"controller-manager-687cb74d44-dbrtk\" (UID: \"bd9429e6-8e55-4665-9624-1f1c77113fd6\") " pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:58 crc kubenswrapper[4705]: I0124 07:46:58.131868 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/bd9429e6-8e55-4665-9624-1f1c77113fd6-client-ca\") pod \"controller-manager-687cb74d44-dbrtk\" (UID: \"bd9429e6-8e55-4665-9624-1f1c77113fd6\") " pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:58 crc kubenswrapper[4705]: I0124 07:46:58.131905 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd9429e6-8e55-4665-9624-1f1c77113fd6-proxy-ca-bundles\") pod \"controller-manager-687cb74d44-dbrtk\" (UID: \"bd9429e6-8e55-4665-9624-1f1c77113fd6\") " pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:58 crc kubenswrapper[4705]: I0124 07:46:58.233020 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd9429e6-8e55-4665-9624-1f1c77113fd6-proxy-ca-bundles\") pod \"controller-manager-687cb74d44-dbrtk\" (UID: \"bd9429e6-8e55-4665-9624-1f1c77113fd6\") " pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:58 crc kubenswrapper[4705]: I0124 07:46:58.233092 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpvsn\" (UniqueName: \"kubernetes.io/projected/bd9429e6-8e55-4665-9624-1f1c77113fd6-kube-api-access-fpvsn\") pod \"controller-manager-687cb74d44-dbrtk\" (UID: \"bd9429e6-8e55-4665-9624-1f1c77113fd6\") " pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:58 crc kubenswrapper[4705]: I0124 07:46:58.233162 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9429e6-8e55-4665-9624-1f1c77113fd6-config\") pod \"controller-manager-687cb74d44-dbrtk\" (UID: \"bd9429e6-8e55-4665-9624-1f1c77113fd6\") " pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:58 crc kubenswrapper[4705]: 
I0124 07:46:58.233184 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd9429e6-8e55-4665-9624-1f1c77113fd6-serving-cert\") pod \"controller-manager-687cb74d44-dbrtk\" (UID: \"bd9429e6-8e55-4665-9624-1f1c77113fd6\") " pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:58 crc kubenswrapper[4705]: I0124 07:46:58.233216 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd9429e6-8e55-4665-9624-1f1c77113fd6-client-ca\") pod \"controller-manager-687cb74d44-dbrtk\" (UID: \"bd9429e6-8e55-4665-9624-1f1c77113fd6\") " pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:58 crc kubenswrapper[4705]: I0124 07:46:58.234094 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd9429e6-8e55-4665-9624-1f1c77113fd6-client-ca\") pod \"controller-manager-687cb74d44-dbrtk\" (UID: \"bd9429e6-8e55-4665-9624-1f1c77113fd6\") " pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:58 crc kubenswrapper[4705]: I0124 07:46:58.234246 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd9429e6-8e55-4665-9624-1f1c77113fd6-proxy-ca-bundles\") pod \"controller-manager-687cb74d44-dbrtk\" (UID: \"bd9429e6-8e55-4665-9624-1f1c77113fd6\") " pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:58 crc kubenswrapper[4705]: I0124 07:46:58.234611 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9429e6-8e55-4665-9624-1f1c77113fd6-config\") pod \"controller-manager-687cb74d44-dbrtk\" (UID: \"bd9429e6-8e55-4665-9624-1f1c77113fd6\") " 
pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:58 crc kubenswrapper[4705]: I0124 07:46:58.240568 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd9429e6-8e55-4665-9624-1f1c77113fd6-serving-cert\") pod \"controller-manager-687cb74d44-dbrtk\" (UID: \"bd9429e6-8e55-4665-9624-1f1c77113fd6\") " pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:58 crc kubenswrapper[4705]: I0124 07:46:58.251376 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpvsn\" (UniqueName: \"kubernetes.io/projected/bd9429e6-8e55-4665-9624-1f1c77113fd6-kube-api-access-fpvsn\") pod \"controller-manager-687cb74d44-dbrtk\" (UID: \"bd9429e6-8e55-4665-9624-1f1c77113fd6\") " pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:58 crc kubenswrapper[4705]: I0124 07:46:58.332341 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:58 crc kubenswrapper[4705]: I0124 07:46:58.651762 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-687cb74d44-dbrtk"] Jan 24 07:46:58 crc kubenswrapper[4705]: W0124 07:46:58.664447 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd9429e6_8e55_4665_9624_1f1c77113fd6.slice/crio-e0dce3b1e8af92c4c3a84fbacbfc0e28c724a994559d0bfaf92a35bba4ca2424 WatchSource:0}: Error finding container e0dce3b1e8af92c4c3a84fbacbfc0e28c724a994559d0bfaf92a35bba4ca2424: Status 404 returned error can't find the container with id e0dce3b1e8af92c4c3a84fbacbfc0e28c724a994559d0bfaf92a35bba4ca2424 Jan 24 07:46:59 crc kubenswrapper[4705]: I0124 07:46:59.397570 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" event={"ID":"bd9429e6-8e55-4665-9624-1f1c77113fd6","Type":"ContainerStarted","Data":"b4140a18dd83a5a3de145c34e840b4309677e5803c177e269019a8a5069e8fb3"} Jan 24 07:46:59 crc kubenswrapper[4705]: I0124 07:46:59.397611 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" event={"ID":"bd9429e6-8e55-4665-9624-1f1c77113fd6","Type":"ContainerStarted","Data":"e0dce3b1e8af92c4c3a84fbacbfc0e28c724a994559d0bfaf92a35bba4ca2424"} Jan 24 07:46:59 crc kubenswrapper[4705]: I0124 07:46:59.398364 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:59 crc kubenswrapper[4705]: I0124 07:46:59.407677 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" Jan 24 07:46:59 crc kubenswrapper[4705]: I0124 07:46:59.449810 4705 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-687cb74d44-dbrtk" podStartSLOduration=5.449789635 podStartE2EDuration="5.449789635s" podCreationTimestamp="2026-01-24 07:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:46:59.447932445 +0000 UTC m=+358.167805723" watchObservedRunningTime="2026-01-24 07:46:59.449789635 +0000 UTC m=+358.169662933" Jan 24 07:47:07 crc kubenswrapper[4705]: I0124 07:47:07.071442 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:47:07 crc kubenswrapper[4705]: I0124 07:47:07.072918 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.376146 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8zmsq"] Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.377742 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.581123 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8zmsq"] Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.661204 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/00057374-4826-4163-953b-a19df7f436e2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.661259 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00057374-4826-4163-953b-a19df7f436e2-bound-sa-token\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.661445 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/00057374-4826-4163-953b-a19df7f436e2-registry-certificates\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.661487 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00057374-4826-4163-953b-a19df7f436e2-trusted-ca\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.661550 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.661599 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmq96\" (UniqueName: \"kubernetes.io/projected/00057374-4826-4163-953b-a19df7f436e2-kube-api-access-cmq96\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.661621 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/00057374-4826-4163-953b-a19df7f436e2-registry-tls\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.661644 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/00057374-4826-4163-953b-a19df7f436e2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.682997 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.762685 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/00057374-4826-4163-953b-a19df7f436e2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.762749 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/00057374-4826-4163-953b-a19df7f436e2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.762778 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00057374-4826-4163-953b-a19df7f436e2-bound-sa-token\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.762854 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/00057374-4826-4163-953b-a19df7f436e2-registry-certificates\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 
07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.762875 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00057374-4826-4163-953b-a19df7f436e2-trusted-ca\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.762909 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmq96\" (UniqueName: \"kubernetes.io/projected/00057374-4826-4163-953b-a19df7f436e2-kube-api-access-cmq96\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.762933 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/00057374-4826-4163-953b-a19df7f436e2-registry-tls\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.763336 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/00057374-4826-4163-953b-a19df7f436e2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.764631 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/00057374-4826-4163-953b-a19df7f436e2-registry-certificates\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.764947 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00057374-4826-4163-953b-a19df7f436e2-trusted-ca\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.768736 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/00057374-4826-4163-953b-a19df7f436e2-registry-tls\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.770552 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/00057374-4826-4163-953b-a19df7f436e2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.778633 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00057374-4826-4163-953b-a19df7f436e2-bound-sa-token\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: \"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.778677 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmq96\" (UniqueName: \"kubernetes.io/projected/00057374-4826-4163-953b-a19df7f436e2-kube-api-access-cmq96\") pod \"image-registry-66df7c8f76-8zmsq\" (UID: 
\"00057374-4826-4163-953b-a19df7f436e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:12 crc kubenswrapper[4705]: I0124 07:47:12.884211 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:13 crc kubenswrapper[4705]: I0124 07:47:13.381320 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8zmsq"] Jan 24 07:47:13 crc kubenswrapper[4705]: W0124 07:47:13.388981 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00057374_4826_4163_953b_a19df7f436e2.slice/crio-93fb18762b911c9aa2a6d25e7d8c03ae2815a03c7df579f0a1d23180fe100dde WatchSource:0}: Error finding container 93fb18762b911c9aa2a6d25e7d8c03ae2815a03c7df579f0a1d23180fe100dde: Status 404 returned error can't find the container with id 93fb18762b911c9aa2a6d25e7d8c03ae2815a03c7df579f0a1d23180fe100dde Jan 24 07:47:13 crc kubenswrapper[4705]: I0124 07:47:13.572735 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" event={"ID":"00057374-4826-4163-953b-a19df7f436e2","Type":"ContainerStarted","Data":"03a7a4d03eb6dce949dd343837fd267f45ddd7a5b9ff554da889bffbc62db81e"} Jan 24 07:47:13 crc kubenswrapper[4705]: I0124 07:47:13.572779 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" event={"ID":"00057374-4826-4163-953b-a19df7f436e2","Type":"ContainerStarted","Data":"93fb18762b911c9aa2a6d25e7d8c03ae2815a03c7df579f0a1d23180fe100dde"} Jan 24 07:47:13 crc kubenswrapper[4705]: I0124 07:47:13.573805 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:13 crc kubenswrapper[4705]: I0124 07:47:13.596812 4705 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" podStartSLOduration=1.5967948170000001 podStartE2EDuration="1.596794817s" podCreationTimestamp="2026-01-24 07:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:47:13.594561246 +0000 UTC m=+372.314434534" watchObservedRunningTime="2026-01-24 07:47:13.596794817 +0000 UTC m=+372.316668105" Jan 24 07:47:19 crc kubenswrapper[4705]: I0124 07:47:19.783633 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" podUID="90ec9237-0f8d-4641-8e07-7fb662297324" containerName="oauth-openshift" containerID="cri-o://b7c78c609ed0b29ff4a63d63179fdd29f1e1963c40aeb5d3399a395fa76c5b13" gracePeriod=15 Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.302359 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.340536 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr"] Jan 24 07:47:20 crc kubenswrapper[4705]: E0124 07:47:20.340975 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ec9237-0f8d-4641-8e07-7fb662297324" containerName="oauth-openshift" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.341006 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ec9237-0f8d-4641-8e07-7fb662297324" containerName="oauth-openshift" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.341148 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ec9237-0f8d-4641-8e07-7fb662297324" containerName="oauth-openshift" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.341705 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.352345 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr"] Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473014 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-serving-cert\") pod \"90ec9237-0f8d-4641-8e07-7fb662297324\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473077 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-router-certs\") pod \"90ec9237-0f8d-4641-8e07-7fb662297324\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473116 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-ocp-branding-template\") pod \"90ec9237-0f8d-4641-8e07-7fb662297324\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473154 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-cliconfig\") pod \"90ec9237-0f8d-4641-8e07-7fb662297324\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473184 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-service-ca\") pod \"90ec9237-0f8d-4641-8e07-7fb662297324\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473208 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90ec9237-0f8d-4641-8e07-7fb662297324-audit-dir\") pod \"90ec9237-0f8d-4641-8e07-7fb662297324\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473239 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-provider-selection\") pod \"90ec9237-0f8d-4641-8e07-7fb662297324\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473261 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-audit-policies\") pod \"90ec9237-0f8d-4641-8e07-7fb662297324\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473285 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-trusted-ca-bundle\") pod \"90ec9237-0f8d-4641-8e07-7fb662297324\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473324 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-error\") pod \"90ec9237-0f8d-4641-8e07-7fb662297324\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473344 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-login\") pod \"90ec9237-0f8d-4641-8e07-7fb662297324\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473370 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-session\") pod \"90ec9237-0f8d-4641-8e07-7fb662297324\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473406 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-idp-0-file-data\") pod \"90ec9237-0f8d-4641-8e07-7fb662297324\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473434 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm45f\" (UniqueName: \"kubernetes.io/projected/90ec9237-0f8d-4641-8e07-7fb662297324-kube-api-access-zm45f\") pod \"90ec9237-0f8d-4641-8e07-7fb662297324\" (UID: \"90ec9237-0f8d-4641-8e07-7fb662297324\") " Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473611 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-audit-dir\") pod 
\"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473702 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473730 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473755 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-user-template-login\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473798 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-router-certs\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " 
pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473844 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-service-ca\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473876 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-session\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473897 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473918 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473947 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-audit-policies\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473966 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npk8s\" (UniqueName: \"kubernetes.io/projected/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-kube-api-access-npk8s\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.473997 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.474024 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-user-template-error\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.474057 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.474673 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90ec9237-0f8d-4641-8e07-7fb662297324-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "90ec9237-0f8d-4641-8e07-7fb662297324" (UID: "90ec9237-0f8d-4641-8e07-7fb662297324"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.474766 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "90ec9237-0f8d-4641-8e07-7fb662297324" (UID: "90ec9237-0f8d-4641-8e07-7fb662297324"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.475420 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "90ec9237-0f8d-4641-8e07-7fb662297324" (UID: "90ec9237-0f8d-4641-8e07-7fb662297324"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.476234 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "90ec9237-0f8d-4641-8e07-7fb662297324" (UID: "90ec9237-0f8d-4641-8e07-7fb662297324"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.477251 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "90ec9237-0f8d-4641-8e07-7fb662297324" (UID: "90ec9237-0f8d-4641-8e07-7fb662297324"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.489986 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4ffm5"] Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.491536 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4ffm5" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.493534 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "90ec9237-0f8d-4641-8e07-7fb662297324" (UID: "90ec9237-0f8d-4641-8e07-7fb662297324"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.493967 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "90ec9237-0f8d-4641-8e07-7fb662297324" (UID: "90ec9237-0f8d-4641-8e07-7fb662297324"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.494394 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "90ec9237-0f8d-4641-8e07-7fb662297324" (UID: "90ec9237-0f8d-4641-8e07-7fb662297324"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.494768 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.494812 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "90ec9237-0f8d-4641-8e07-7fb662297324" (UID: "90ec9237-0f8d-4641-8e07-7fb662297324"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.495616 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "90ec9237-0f8d-4641-8e07-7fb662297324" (UID: "90ec9237-0f8d-4641-8e07-7fb662297324"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.494891 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "90ec9237-0f8d-4641-8e07-7fb662297324" (UID: "90ec9237-0f8d-4641-8e07-7fb662297324"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.495753 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "90ec9237-0f8d-4641-8e07-7fb662297324" (UID: "90ec9237-0f8d-4641-8e07-7fb662297324"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.497138 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ec9237-0f8d-4641-8e07-7fb662297324-kube-api-access-zm45f" (OuterVolumeSpecName: "kube-api-access-zm45f") pod "90ec9237-0f8d-4641-8e07-7fb662297324" (UID: "90ec9237-0f8d-4641-8e07-7fb662297324"). InnerVolumeSpecName "kube-api-access-zm45f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.727220 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "90ec9237-0f8d-4641-8e07-7fb662297324" (UID: "90ec9237-0f8d-4641-8e07-7fb662297324"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.727759 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-audit-dir\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.727950 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xggx\" (UniqueName: \"kubernetes.io/projected/d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea-kube-api-access-7xggx\") pod \"redhat-marketplace-4ffm5\" (UID: \"d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea\") " pod="openshift-marketplace/redhat-marketplace-4ffm5" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.727977 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728000 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728014 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-user-template-login\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728034 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea-catalog-content\") pod \"redhat-marketplace-4ffm5\" (UID: \"d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea\") " pod="openshift-marketplace/redhat-marketplace-4ffm5" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728071 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-router-certs\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728097 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-service-ca\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728119 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-session\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 
24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728143 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728166 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728189 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-audit-policies\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728209 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npk8s\" (UniqueName: \"kubernetes.io/projected/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-kube-api-access-npk8s\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728230 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728249 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-user-template-error\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728269 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea-utilities\") pod \"redhat-marketplace-4ffm5\" (UID: \"d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea\") " pod="openshift-marketplace/redhat-marketplace-4ffm5" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728287 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728346 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728358 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" 
(UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728368 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm45f\" (UniqueName: \"kubernetes.io/projected/90ec9237-0f8d-4641-8e07-7fb662297324-kube-api-access-zm45f\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728379 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728389 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728400 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728410 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728419 4705 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90ec9237-0f8d-4641-8e07-7fb662297324-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728428 4705 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728438 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728447 4705 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728456 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728466 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.728475 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/90ec9237-0f8d-4641-8e07-7fb662297324-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.729368 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.729574 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-audit-policies\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.730587 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.732551 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-audit-dir\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.736138 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.736470 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.738138 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.738378 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.738470 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.738496 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" event={"ID":"90ec9237-0f8d-4641-8e07-7fb662297324","Type":"ContainerDied","Data":"b7c78c609ed0b29ff4a63d63179fdd29f1e1963c40aeb5d3399a395fa76c5b13"} Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.738550 4705 scope.go:117] "RemoveContainer" containerID="b7c78c609ed0b29ff4a63d63179fdd29f1e1963c40aeb5d3399a395fa76c5b13" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.739084 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.740582 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-router-certs\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.742672 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4ffm5"] Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.743196 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-user-template-error\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: 
\"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.743582 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-user-template-login\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.759469 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-v4-0-config-system-session\") pod \"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.738349 4705 generic.go:334] "Generic (PLEG): container finished" podID="90ec9237-0f8d-4641-8e07-7fb662297324" containerID="b7c78c609ed0b29ff4a63d63179fdd29f1e1963c40aeb5d3399a395fa76c5b13" exitCode=0 Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.779945 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rnmwt" event={"ID":"90ec9237-0f8d-4641-8e07-7fb662297324","Type":"ContainerDied","Data":"c485c137813bb676013bc0fdc4b54acfe594c7be5157b121c4ac1d4be24aa380"} Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.780107 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4rlgc"] Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.782611 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npk8s\" (UniqueName: \"kubernetes.io/projected/69f380f7-9eef-4cc7-a7d4-05e9be0518d1-kube-api-access-npk8s\") pod 
\"oauth-openshift-7dcb57cccd-p5qrr\" (UID: \"69f380f7-9eef-4cc7-a7d4-05e9be0518d1\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.802427 4705 scope.go:117] "RemoveContainer" containerID="b7c78c609ed0b29ff4a63d63179fdd29f1e1963c40aeb5d3399a395fa76c5b13" Jan 24 07:47:20 crc kubenswrapper[4705]: E0124 07:47:20.803030 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7c78c609ed0b29ff4a63d63179fdd29f1e1963c40aeb5d3399a395fa76c5b13\": container with ID starting with b7c78c609ed0b29ff4a63d63179fdd29f1e1963c40aeb5d3399a395fa76c5b13 not found: ID does not exist" containerID="b7c78c609ed0b29ff4a63d63179fdd29f1e1963c40aeb5d3399a395fa76c5b13" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.803076 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7c78c609ed0b29ff4a63d63179fdd29f1e1963c40aeb5d3399a395fa76c5b13"} err="failed to get container status \"b7c78c609ed0b29ff4a63d63179fdd29f1e1963c40aeb5d3399a395fa76c5b13\": rpc error: code = NotFound desc = could not find container \"b7c78c609ed0b29ff4a63d63179fdd29f1e1963c40aeb5d3399a395fa76c5b13\": container with ID starting with b7c78c609ed0b29ff4a63d63179fdd29f1e1963c40aeb5d3399a395fa76c5b13 not found: ID does not exist" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.811155 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4rlgc"] Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.811329 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4rlgc" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.819354 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.834475 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2681beb5-9536-4c9d-9221-18da1c8e244b-catalog-content\") pod \"redhat-operators-4rlgc\" (UID: \"2681beb5-9536-4c9d-9221-18da1c8e244b\") " pod="openshift-marketplace/redhat-operators-4rlgc" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.834546 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2681beb5-9536-4c9d-9221-18da1c8e244b-utilities\") pod \"redhat-operators-4rlgc\" (UID: \"2681beb5-9536-4c9d-9221-18da1c8e244b\") " pod="openshift-marketplace/redhat-operators-4rlgc" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.834583 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea-utilities\") pod \"redhat-marketplace-4ffm5\" (UID: \"d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea\") " pod="openshift-marketplace/redhat-marketplace-4ffm5" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.834636 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xggx\" (UniqueName: \"kubernetes.io/projected/d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea-kube-api-access-7xggx\") pod \"redhat-marketplace-4ffm5\" (UID: \"d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea\") " pod="openshift-marketplace/redhat-marketplace-4ffm5" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.834657 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea-catalog-content\") pod \"redhat-marketplace-4ffm5\" (UID: \"d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea\") " pod="openshift-marketplace/redhat-marketplace-4ffm5" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.834686 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sss64\" (UniqueName: \"kubernetes.io/projected/2681beb5-9536-4c9d-9221-18da1c8e244b-kube-api-access-sss64\") pod \"redhat-operators-4rlgc\" (UID: \"2681beb5-9536-4c9d-9221-18da1c8e244b\") " pod="openshift-marketplace/redhat-operators-4rlgc" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.840286 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea-utilities\") pod \"redhat-marketplace-4ffm5\" (UID: \"d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea\") " pod="openshift-marketplace/redhat-marketplace-4ffm5" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.841326 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rnmwt"] Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.845032 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea-catalog-content\") pod \"redhat-marketplace-4ffm5\" (UID: \"d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea\") " pod="openshift-marketplace/redhat-marketplace-4ffm5" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.871690 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rnmwt"] Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.876560 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xggx\" (UniqueName: 
\"kubernetes.io/projected/d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea-kube-api-access-7xggx\") pod \"redhat-marketplace-4ffm5\" (UID: \"d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea\") " pod="openshift-marketplace/redhat-marketplace-4ffm5" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.935840 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2681beb5-9536-4c9d-9221-18da1c8e244b-utilities\") pod \"redhat-operators-4rlgc\" (UID: \"2681beb5-9536-4c9d-9221-18da1c8e244b\") " pod="openshift-marketplace/redhat-operators-4rlgc" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.935955 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sss64\" (UniqueName: \"kubernetes.io/projected/2681beb5-9536-4c9d-9221-18da1c8e244b-kube-api-access-sss64\") pod \"redhat-operators-4rlgc\" (UID: \"2681beb5-9536-4c9d-9221-18da1c8e244b\") " pod="openshift-marketplace/redhat-operators-4rlgc" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.936008 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2681beb5-9536-4c9d-9221-18da1c8e244b-catalog-content\") pod \"redhat-operators-4rlgc\" (UID: \"2681beb5-9536-4c9d-9221-18da1c8e244b\") " pod="openshift-marketplace/redhat-operators-4rlgc" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.936566 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2681beb5-9536-4c9d-9221-18da1c8e244b-catalog-content\") pod \"redhat-operators-4rlgc\" (UID: \"2681beb5-9536-4c9d-9221-18da1c8e244b\") " pod="openshift-marketplace/redhat-operators-4rlgc" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.937117 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2681beb5-9536-4c9d-9221-18da1c8e244b-utilities\") pod \"redhat-operators-4rlgc\" (UID: \"2681beb5-9536-4c9d-9221-18da1c8e244b\") " pod="openshift-marketplace/redhat-operators-4rlgc" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.953388 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sss64\" (UniqueName: \"kubernetes.io/projected/2681beb5-9536-4c9d-9221-18da1c8e244b-kube-api-access-sss64\") pod \"redhat-operators-4rlgc\" (UID: \"2681beb5-9536-4c9d-9221-18da1c8e244b\") " pod="openshift-marketplace/redhat-operators-4rlgc" Jan 24 07:47:20 crc kubenswrapper[4705]: I0124 07:47:20.971943 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:21 crc kubenswrapper[4705]: I0124 07:47:21.240950 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4ffm5" Jan 24 07:47:21 crc kubenswrapper[4705]: I0124 07:47:21.240979 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4rlgc" Jan 24 07:47:21 crc kubenswrapper[4705]: I0124 07:47:21.543930 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr"] Jan 24 07:47:21 crc kubenswrapper[4705]: W0124 07:47:21.578382 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69f380f7_9eef_4cc7_a7d4_05e9be0518d1.slice/crio-5d7a7ad5a78c33070b616fd37f0442c56a591078da5acfe2ccee76661a3f7894 WatchSource:0}: Error finding container 5d7a7ad5a78c33070b616fd37f0442c56a591078da5acfe2ccee76661a3f7894: Status 404 returned error can't find the container with id 5d7a7ad5a78c33070b616fd37f0442c56a591078da5acfe2ccee76661a3f7894 Jan 24 07:47:21 crc kubenswrapper[4705]: I0124 07:47:21.583377 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ec9237-0f8d-4641-8e07-7fb662297324" path="/var/lib/kubelet/pods/90ec9237-0f8d-4641-8e07-7fb662297324/volumes" Jan 24 07:47:21 crc kubenswrapper[4705]: W0124 07:47:21.612337 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1c9b3f7_c198_42c1_80b8_0b8245e1f3ea.slice/crio-68358cc06478c983e0156891b3f2eb2c003ca077ecd6a24dea66beb8ff4db27d WatchSource:0}: Error finding container 68358cc06478c983e0156891b3f2eb2c003ca077ecd6a24dea66beb8ff4db27d: Status 404 returned error can't find the container with id 68358cc06478c983e0156891b3f2eb2c003ca077ecd6a24dea66beb8ff4db27d Jan 24 07:47:21 crc kubenswrapper[4705]: I0124 07:47:21.614443 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4ffm5"] Jan 24 07:47:21 crc kubenswrapper[4705]: I0124 07:47:21.712645 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4rlgc"] Jan 24 07:47:21 crc kubenswrapper[4705]: W0124 07:47:21.718109 4705 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2681beb5_9536_4c9d_9221_18da1c8e244b.slice/crio-c2f265360f9cb357ac68aa8f23dda57b57267194b4081b971c9f56bb51f2cb1a WatchSource:0}: Error finding container c2f265360f9cb357ac68aa8f23dda57b57267194b4081b971c9f56bb51f2cb1a: Status 404 returned error can't find the container with id c2f265360f9cb357ac68aa8f23dda57b57267194b4081b971c9f56bb51f2cb1a Jan 24 07:47:21 crc kubenswrapper[4705]: I0124 07:47:21.785476 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" event={"ID":"69f380f7-9eef-4cc7-a7d4-05e9be0518d1","Type":"ContainerStarted","Data":"5d7a7ad5a78c33070b616fd37f0442c56a591078da5acfe2ccee76661a3f7894"} Jan 24 07:47:21 crc kubenswrapper[4705]: I0124 07:47:21.787303 4705 generic.go:334] "Generic (PLEG): container finished" podID="d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea" containerID="d34e787be37747142d7102ca02ed470f3c27b9a604cd62768b7a2dc4d2c61c90" exitCode=0 Jan 24 07:47:21 crc kubenswrapper[4705]: I0124 07:47:21.787339 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ffm5" event={"ID":"d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea","Type":"ContainerDied","Data":"d34e787be37747142d7102ca02ed470f3c27b9a604cd62768b7a2dc4d2c61c90"} Jan 24 07:47:21 crc kubenswrapper[4705]: I0124 07:47:21.787367 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ffm5" event={"ID":"d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea","Type":"ContainerStarted","Data":"68358cc06478c983e0156891b3f2eb2c003ca077ecd6a24dea66beb8ff4db27d"} Jan 24 07:47:21 crc kubenswrapper[4705]: I0124 07:47:21.788786 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rlgc" 
event={"ID":"2681beb5-9536-4c9d-9221-18da1c8e244b","Type":"ContainerStarted","Data":"c2f265360f9cb357ac68aa8f23dda57b57267194b4081b971c9f56bb51f2cb1a"} Jan 24 07:47:22 crc kubenswrapper[4705]: I0124 07:47:22.796380 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" event={"ID":"69f380f7-9eef-4cc7-a7d4-05e9be0518d1","Type":"ContainerStarted","Data":"a26082ab74905ab9d4fa84a8f8eebc6ef7c44e57c4557b1620f0498d27067156"} Jan 24 07:47:22 crc kubenswrapper[4705]: I0124 07:47:22.796759 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:22 crc kubenswrapper[4705]: I0124 07:47:22.797802 4705 generic.go:334] "Generic (PLEG): container finished" podID="d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea" containerID="b127c9aed55f4b4433073bc51d40760a098449dfb90e81fa1c948dc359437f85" exitCode=0 Jan 24 07:47:22 crc kubenswrapper[4705]: I0124 07:47:22.797886 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ffm5" event={"ID":"d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea","Type":"ContainerDied","Data":"b127c9aed55f4b4433073bc51d40760a098449dfb90e81fa1c948dc359437f85"} Jan 24 07:47:22 crc kubenswrapper[4705]: I0124 07:47:22.799518 4705 generic.go:334] "Generic (PLEG): container finished" podID="2681beb5-9536-4c9d-9221-18da1c8e244b" containerID="eb9cdf9e8092963e0bf595c354df40cfdf058ea4418f2015a4148c4ba46b3556" exitCode=0 Jan 24 07:47:22 crc kubenswrapper[4705]: I0124 07:47:22.799590 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rlgc" event={"ID":"2681beb5-9536-4c9d-9221-18da1c8e244b","Type":"ContainerDied","Data":"eb9cdf9e8092963e0bf595c354df40cfdf058ea4418f2015a4148c4ba46b3556"} Jan 24 07:47:22 crc kubenswrapper[4705]: I0124 07:47:22.804112 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" Jan 24 07:47:22 crc kubenswrapper[4705]: I0124 07:47:22.817778 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7dcb57cccd-p5qrr" podStartSLOduration=28.817759312 podStartE2EDuration="28.817759312s" podCreationTimestamp="2026-01-24 07:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:47:22.815208723 +0000 UTC m=+381.535082011" watchObservedRunningTime="2026-01-24 07:47:22.817759312 +0000 UTC m=+381.537632600" Jan 24 07:47:22 crc kubenswrapper[4705]: I0124 07:47:22.888550 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jxk9r"] Jan 24 07:47:22 crc kubenswrapper[4705]: I0124 07:47:22.890220 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jxk9r" Jan 24 07:47:22 crc kubenswrapper[4705]: I0124 07:47:22.892177 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 24 07:47:22 crc kubenswrapper[4705]: I0124 07:47:22.912843 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jxk9r"] Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.288882 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9rt7\" (UniqueName: \"kubernetes.io/projected/06f954e9-b37e-4822-b132-331764c6f9ac-kube-api-access-q9rt7\") pod \"community-operators-jxk9r\" (UID: \"06f954e9-b37e-4822-b132-331764c6f9ac\") " pod="openshift-marketplace/community-operators-jxk9r" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.289368 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/06f954e9-b37e-4822-b132-331764c6f9ac-catalog-content\") pod \"community-operators-jxk9r\" (UID: \"06f954e9-b37e-4822-b132-331764c6f9ac\") " pod="openshift-marketplace/community-operators-jxk9r" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.289439 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f954e9-b37e-4822-b132-331764c6f9ac-utilities\") pod \"community-operators-jxk9r\" (UID: \"06f954e9-b37e-4822-b132-331764c6f9ac\") " pod="openshift-marketplace/community-operators-jxk9r" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.324716 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rmh7t"] Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.335377 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.341446 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.348853 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rmh7t"] Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.393014 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f954e9-b37e-4822-b132-331764c6f9ac-catalog-content\") pod \"community-operators-jxk9r\" (UID: \"06f954e9-b37e-4822-b132-331764c6f9ac\") " pod="openshift-marketplace/community-operators-jxk9r" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.393064 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/06f954e9-b37e-4822-b132-331764c6f9ac-utilities\") pod \"community-operators-jxk9r\" (UID: \"06f954e9-b37e-4822-b132-331764c6f9ac\") " pod="openshift-marketplace/community-operators-jxk9r" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.393118 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9rt7\" (UniqueName: \"kubernetes.io/projected/06f954e9-b37e-4822-b132-331764c6f9ac-kube-api-access-q9rt7\") pod \"community-operators-jxk9r\" (UID: \"06f954e9-b37e-4822-b132-331764c6f9ac\") " pod="openshift-marketplace/community-operators-jxk9r" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.393888 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f954e9-b37e-4822-b132-331764c6f9ac-catalog-content\") pod \"community-operators-jxk9r\" (UID: \"06f954e9-b37e-4822-b132-331764c6f9ac\") " pod="openshift-marketplace/community-operators-jxk9r" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.394446 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f954e9-b37e-4822-b132-331764c6f9ac-utilities\") pod \"community-operators-jxk9r\" (UID: \"06f954e9-b37e-4822-b132-331764c6f9ac\") " pod="openshift-marketplace/community-operators-jxk9r" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.419254 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9rt7\" (UniqueName: \"kubernetes.io/projected/06f954e9-b37e-4822-b132-331764c6f9ac-kube-api-access-q9rt7\") pod \"community-operators-jxk9r\" (UID: \"06f954e9-b37e-4822-b132-331764c6f9ac\") " pod="openshift-marketplace/community-operators-jxk9r" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.495664 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrt5k\" (UniqueName: 
\"kubernetes.io/projected/2fbcc45f-578c-43e4-9351-4b30d72d28f9-kube-api-access-zrt5k\") pod \"certified-operators-rmh7t\" (UID: \"2fbcc45f-578c-43e4-9351-4b30d72d28f9\") " pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.495761 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbcc45f-578c-43e4-9351-4b30d72d28f9-catalog-content\") pod \"certified-operators-rmh7t\" (UID: \"2fbcc45f-578c-43e4-9351-4b30d72d28f9\") " pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.495791 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbcc45f-578c-43e4-9351-4b30d72d28f9-utilities\") pod \"certified-operators-rmh7t\" (UID: \"2fbcc45f-578c-43e4-9351-4b30d72d28f9\") " pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.510430 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jxk9r" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.617401 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrt5k\" (UniqueName: \"kubernetes.io/projected/2fbcc45f-578c-43e4-9351-4b30d72d28f9-kube-api-access-zrt5k\") pod \"certified-operators-rmh7t\" (UID: \"2fbcc45f-578c-43e4-9351-4b30d72d28f9\") " pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.617455 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbcc45f-578c-43e4-9351-4b30d72d28f9-catalog-content\") pod \"certified-operators-rmh7t\" (UID: \"2fbcc45f-578c-43e4-9351-4b30d72d28f9\") " pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.617475 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbcc45f-578c-43e4-9351-4b30d72d28f9-utilities\") pod \"certified-operators-rmh7t\" (UID: \"2fbcc45f-578c-43e4-9351-4b30d72d28f9\") " pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.617949 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbcc45f-578c-43e4-9351-4b30d72d28f9-utilities\") pod \"certified-operators-rmh7t\" (UID: \"2fbcc45f-578c-43e4-9351-4b30d72d28f9\") " pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.618146 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbcc45f-578c-43e4-9351-4b30d72d28f9-catalog-content\") pod \"certified-operators-rmh7t\" (UID: \"2fbcc45f-578c-43e4-9351-4b30d72d28f9\") " 
pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.640219 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrt5k\" (UniqueName: \"kubernetes.io/projected/2fbcc45f-578c-43e4-9351-4b30d72d28f9-kube-api-access-zrt5k\") pod \"certified-operators-rmh7t\" (UID: \"2fbcc45f-578c-43e4-9351-4b30d72d28f9\") " pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.659463 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.809539 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ffm5" event={"ID":"d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea","Type":"ContainerStarted","Data":"1fb721d0619a748b0f6721745122d9b61988cf123aa6ccec18185ad6454acf4e"} Jan 24 07:47:23 crc kubenswrapper[4705]: I0124 07:47:23.840256 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4ffm5" podStartSLOduration=2.236322463 podStartE2EDuration="3.840233654s" podCreationTimestamp="2026-01-24 07:47:20 +0000 UTC" firstStartedPulling="2026-01-24 07:47:21.789161956 +0000 UTC m=+380.509035244" lastFinishedPulling="2026-01-24 07:47:23.393073147 +0000 UTC m=+382.112946435" observedRunningTime="2026-01-24 07:47:23.833260245 +0000 UTC m=+382.553133553" watchObservedRunningTime="2026-01-24 07:47:23.840233654 +0000 UTC m=+382.560106952" Jan 24 07:47:24 crc kubenswrapper[4705]: I0124 07:47:24.112463 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rmh7t"] Jan 24 07:47:24 crc kubenswrapper[4705]: W0124 07:47:24.117634 4705 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fbcc45f_578c_43e4_9351_4b30d72d28f9.slice/crio-41183928698f2aab69ca463f7c2a2632bf923b1dec13992de3286b98a5547b65 WatchSource:0}: Error finding container 41183928698f2aab69ca463f7c2a2632bf923b1dec13992de3286b98a5547b65: Status 404 returned error can't find the container with id 41183928698f2aab69ca463f7c2a2632bf923b1dec13992de3286b98a5547b65 Jan 24 07:47:24 crc kubenswrapper[4705]: I0124 07:47:24.194028 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jxk9r"] Jan 24 07:47:24 crc kubenswrapper[4705]: W0124 07:47:24.199858 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06f954e9_b37e_4822_b132_331764c6f9ac.slice/crio-012fbaa5fc31a8dee83488f01b612eddd637d4adf985e4f880c02fd29f2acd59 WatchSource:0}: Error finding container 012fbaa5fc31a8dee83488f01b612eddd637d4adf985e4f880c02fd29f2acd59: Status 404 returned error can't find the container with id 012fbaa5fc31a8dee83488f01b612eddd637d4adf985e4f880c02fd29f2acd59 Jan 24 07:47:24 crc kubenswrapper[4705]: I0124 07:47:24.816897 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rlgc" event={"ID":"2681beb5-9536-4c9d-9221-18da1c8e244b","Type":"ContainerStarted","Data":"6ba76566651f15e4d35c968e616f5e70ad8dce20616e91554dac5bf1a4c8db51"} Jan 24 07:47:24 crc kubenswrapper[4705]: I0124 07:47:24.818958 4705 generic.go:334] "Generic (PLEG): container finished" podID="06f954e9-b37e-4822-b132-331764c6f9ac" containerID="fd9303dfd67713a1810750cafb7514ba2f89f6d709d327110de020211c452d72" exitCode=0 Jan 24 07:47:24 crc kubenswrapper[4705]: I0124 07:47:24.819061 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxk9r" 
event={"ID":"06f954e9-b37e-4822-b132-331764c6f9ac","Type":"ContainerDied","Data":"fd9303dfd67713a1810750cafb7514ba2f89f6d709d327110de020211c452d72"} Jan 24 07:47:24 crc kubenswrapper[4705]: I0124 07:47:24.819101 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxk9r" event={"ID":"06f954e9-b37e-4822-b132-331764c6f9ac","Type":"ContainerStarted","Data":"012fbaa5fc31a8dee83488f01b612eddd637d4adf985e4f880c02fd29f2acd59"} Jan 24 07:47:24 crc kubenswrapper[4705]: I0124 07:47:24.821285 4705 generic.go:334] "Generic (PLEG): container finished" podID="2fbcc45f-578c-43e4-9351-4b30d72d28f9" containerID="9dfdcf7b85c6530a1bcb2c331fe6645001aba6c1c09a40910b40e3ff6ddf44da" exitCode=0 Jan 24 07:47:24 crc kubenswrapper[4705]: I0124 07:47:24.821388 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmh7t" event={"ID":"2fbcc45f-578c-43e4-9351-4b30d72d28f9","Type":"ContainerDied","Data":"9dfdcf7b85c6530a1bcb2c331fe6645001aba6c1c09a40910b40e3ff6ddf44da"} Jan 24 07:47:24 crc kubenswrapper[4705]: I0124 07:47:24.821441 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmh7t" event={"ID":"2fbcc45f-578c-43e4-9351-4b30d72d28f9","Type":"ContainerStarted","Data":"41183928698f2aab69ca463f7c2a2632bf923b1dec13992de3286b98a5547b65"} Jan 24 07:47:25 crc kubenswrapper[4705]: I0124 07:47:25.828609 4705 generic.go:334] "Generic (PLEG): container finished" podID="06f954e9-b37e-4822-b132-331764c6f9ac" containerID="366016949d9c12290319e61b8bdcf335a6b0d8545e2c36883df01fd6f7a5eb3f" exitCode=0 Jan 24 07:47:25 crc kubenswrapper[4705]: I0124 07:47:25.828766 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxk9r" event={"ID":"06f954e9-b37e-4822-b132-331764c6f9ac","Type":"ContainerDied","Data":"366016949d9c12290319e61b8bdcf335a6b0d8545e2c36883df01fd6f7a5eb3f"} Jan 24 07:47:25 crc kubenswrapper[4705]: I0124 
07:47:25.832482 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmh7t" event={"ID":"2fbcc45f-578c-43e4-9351-4b30d72d28f9","Type":"ContainerStarted","Data":"e48c0e72ec4d581242d7b99dca1b90761607d7c14741fc54c495d9871bae535c"} Jan 24 07:47:25 crc kubenswrapper[4705]: I0124 07:47:25.834841 4705 generic.go:334] "Generic (PLEG): container finished" podID="2681beb5-9536-4c9d-9221-18da1c8e244b" containerID="6ba76566651f15e4d35c968e616f5e70ad8dce20616e91554dac5bf1a4c8db51" exitCode=0 Jan 24 07:47:25 crc kubenswrapper[4705]: I0124 07:47:25.834893 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rlgc" event={"ID":"2681beb5-9536-4c9d-9221-18da1c8e244b","Type":"ContainerDied","Data":"6ba76566651f15e4d35c968e616f5e70ad8dce20616e91554dac5bf1a4c8db51"} Jan 24 07:47:26 crc kubenswrapper[4705]: I0124 07:47:26.844358 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rlgc" event={"ID":"2681beb5-9536-4c9d-9221-18da1c8e244b","Type":"ContainerStarted","Data":"c16f138ace77a9492ee574bb7160d8efb608c98608c5c66aa69962d385933dde"} Jan 24 07:47:26 crc kubenswrapper[4705]: I0124 07:47:26.846927 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxk9r" event={"ID":"06f954e9-b37e-4822-b132-331764c6f9ac","Type":"ContainerStarted","Data":"584805fc1d7ee53f965672f6d132beeabc6afdbecb492286ae09ebd524164759"} Jan 24 07:47:26 crc kubenswrapper[4705]: I0124 07:47:26.848680 4705 generic.go:334] "Generic (PLEG): container finished" podID="2fbcc45f-578c-43e4-9351-4b30d72d28f9" containerID="e48c0e72ec4d581242d7b99dca1b90761607d7c14741fc54c495d9871bae535c" exitCode=0 Jan 24 07:47:26 crc kubenswrapper[4705]: I0124 07:47:26.848726 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmh7t" 
event={"ID":"2fbcc45f-578c-43e4-9351-4b30d72d28f9","Type":"ContainerDied","Data":"e48c0e72ec4d581242d7b99dca1b90761607d7c14741fc54c495d9871bae535c"} Jan 24 07:47:26 crc kubenswrapper[4705]: I0124 07:47:26.869202 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4rlgc" podStartSLOduration=3.325927899 podStartE2EDuration="6.869178786s" podCreationTimestamp="2026-01-24 07:47:20 +0000 UTC" firstStartedPulling="2026-01-24 07:47:22.802048397 +0000 UTC m=+381.521921685" lastFinishedPulling="2026-01-24 07:47:26.345299284 +0000 UTC m=+385.065172572" observedRunningTime="2026-01-24 07:47:26.864293564 +0000 UTC m=+385.584166852" watchObservedRunningTime="2026-01-24 07:47:26.869178786 +0000 UTC m=+385.589052074" Jan 24 07:47:26 crc kubenswrapper[4705]: I0124 07:47:26.906513 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jxk9r" podStartSLOduration=3.447785584 podStartE2EDuration="4.906492316s" podCreationTimestamp="2026-01-24 07:47:22 +0000 UTC" firstStartedPulling="2026-01-24 07:47:24.820067972 +0000 UTC m=+383.539941270" lastFinishedPulling="2026-01-24 07:47:26.278774714 +0000 UTC m=+384.998648002" observedRunningTime="2026-01-24 07:47:26.900151444 +0000 UTC m=+385.620024762" watchObservedRunningTime="2026-01-24 07:47:26.906492316 +0000 UTC m=+385.626365614" Jan 24 07:47:28 crc kubenswrapper[4705]: I0124 07:47:28.117198 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmh7t" event={"ID":"2fbcc45f-578c-43e4-9351-4b30d72d28f9","Type":"ContainerStarted","Data":"658e49e3ac62c9d2c4d66fcd75914d3190d305d3d457fc935c49cfd4b9381e10"} Jan 24 07:47:28 crc kubenswrapper[4705]: I0124 07:47:28.141537 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rmh7t" podStartSLOduration=2.500044968 podStartE2EDuration="5.141515208s" 
podCreationTimestamp="2026-01-24 07:47:23 +0000 UTC" firstStartedPulling="2026-01-24 07:47:24.823993558 +0000 UTC m=+383.543866846" lastFinishedPulling="2026-01-24 07:47:27.465463798 +0000 UTC m=+386.185337086" observedRunningTime="2026-01-24 07:47:28.135499885 +0000 UTC m=+386.855373173" watchObservedRunningTime="2026-01-24 07:47:28.141515208 +0000 UTC m=+386.861388496" Jan 24 07:47:31 crc kubenswrapper[4705]: I0124 07:47:31.285562 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4ffm5" Jan 24 07:47:31 crc kubenswrapper[4705]: I0124 07:47:31.285615 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4rlgc" Jan 24 07:47:31 crc kubenswrapper[4705]: I0124 07:47:31.285644 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4rlgc" Jan 24 07:47:31 crc kubenswrapper[4705]: I0124 07:47:31.286696 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4ffm5" Jan 24 07:47:31 crc kubenswrapper[4705]: I0124 07:47:31.344044 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4ffm5" Jan 24 07:47:32 crc kubenswrapper[4705]: I0124 07:47:32.330736 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4rlgc" podUID="2681beb5-9536-4c9d-9221-18da1c8e244b" containerName="registry-server" probeResult="failure" output=< Jan 24 07:47:32 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Jan 24 07:47:32 crc kubenswrapper[4705]: > Jan 24 07:47:32 crc kubenswrapper[4705]: I0124 07:47:32.337019 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4ffm5" Jan 24 07:47:33 crc kubenswrapper[4705]: I0124 07:47:33.169925 4705 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-8zmsq" Jan 24 07:47:33 crc kubenswrapper[4705]: I0124 07:47:33.233722 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-m7gzc"] Jan 24 07:47:33 crc kubenswrapper[4705]: I0124 07:47:33.511443 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jxk9r" Jan 24 07:47:33 crc kubenswrapper[4705]: I0124 07:47:33.511517 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jxk9r" Jan 24 07:47:33 crc kubenswrapper[4705]: I0124 07:47:33.554698 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jxk9r" Jan 24 07:47:33 crc kubenswrapper[4705]: I0124 07:47:33.660549 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 07:47:33 crc kubenswrapper[4705]: I0124 07:47:33.660610 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 07:47:33 crc kubenswrapper[4705]: I0124 07:47:33.699398 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 07:47:34 crc kubenswrapper[4705]: I0124 07:47:34.359091 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jxk9r" Jan 24 07:47:34 crc kubenswrapper[4705]: I0124 07:47:34.382498 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 07:47:37 crc kubenswrapper[4705]: I0124 07:47:37.154834 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:47:37 crc kubenswrapper[4705]: I0124 07:47:37.154883 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:47:41 crc kubenswrapper[4705]: I0124 07:47:41.302708 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4rlgc" Jan 24 07:47:41 crc kubenswrapper[4705]: I0124 07:47:41.337955 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4rlgc" Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.130622 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4"] Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.131142 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" podUID="5a84d617-0a85-4195-b05f-46ee52c1e295" containerName="route-controller-manager" containerID="cri-o://fcf5e321ac3569cb84f09abb95440dc235c3ea1f34544d7b41c5c5e9155ff65c" gracePeriod=30 Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.533933 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.606443 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a84d617-0a85-4195-b05f-46ee52c1e295-config\") pod \"5a84d617-0a85-4195-b05f-46ee52c1e295\" (UID: \"5a84d617-0a85-4195-b05f-46ee52c1e295\") " Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.606532 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sx6q\" (UniqueName: \"kubernetes.io/projected/5a84d617-0a85-4195-b05f-46ee52c1e295-kube-api-access-4sx6q\") pod \"5a84d617-0a85-4195-b05f-46ee52c1e295\" (UID: \"5a84d617-0a85-4195-b05f-46ee52c1e295\") " Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.606563 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a84d617-0a85-4195-b05f-46ee52c1e295-serving-cert\") pod \"5a84d617-0a85-4195-b05f-46ee52c1e295\" (UID: \"5a84d617-0a85-4195-b05f-46ee52c1e295\") " Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.606582 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a84d617-0a85-4195-b05f-46ee52c1e295-client-ca\") pod \"5a84d617-0a85-4195-b05f-46ee52c1e295\" (UID: \"5a84d617-0a85-4195-b05f-46ee52c1e295\") " Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.607269 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a84d617-0a85-4195-b05f-46ee52c1e295-config" (OuterVolumeSpecName: "config") pod "5a84d617-0a85-4195-b05f-46ee52c1e295" (UID: "5a84d617-0a85-4195-b05f-46ee52c1e295"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.607407 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a84d617-0a85-4195-b05f-46ee52c1e295-client-ca" (OuterVolumeSpecName: "client-ca") pod "5a84d617-0a85-4195-b05f-46ee52c1e295" (UID: "5a84d617-0a85-4195-b05f-46ee52c1e295"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.607925 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a84d617-0a85-4195-b05f-46ee52c1e295-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.607951 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a84d617-0a85-4195-b05f-46ee52c1e295-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.611671 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a84d617-0a85-4195-b05f-46ee52c1e295-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5a84d617-0a85-4195-b05f-46ee52c1e295" (UID: "5a84d617-0a85-4195-b05f-46ee52c1e295"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.612152 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a84d617-0a85-4195-b05f-46ee52c1e295-kube-api-access-4sx6q" (OuterVolumeSpecName: "kube-api-access-4sx6q") pod "5a84d617-0a85-4195-b05f-46ee52c1e295" (UID: "5a84d617-0a85-4195-b05f-46ee52c1e295"). InnerVolumeSpecName "kube-api-access-4sx6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.708151 4705 generic.go:334] "Generic (PLEG): container finished" podID="5a84d617-0a85-4195-b05f-46ee52c1e295" containerID="fcf5e321ac3569cb84f09abb95440dc235c3ea1f34544d7b41c5c5e9155ff65c" exitCode=0 Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.708194 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" event={"ID":"5a84d617-0a85-4195-b05f-46ee52c1e295","Type":"ContainerDied","Data":"fcf5e321ac3569cb84f09abb95440dc235c3ea1f34544d7b41c5c5e9155ff65c"} Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.708240 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" event={"ID":"5a84d617-0a85-4195-b05f-46ee52c1e295","Type":"ContainerDied","Data":"f8075a3016c002ae5ef6757fe4d0239e3708b2d62bb0400fca90b4cb9ceab383"} Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.708233 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4" Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.708257 4705 scope.go:117] "RemoveContainer" containerID="fcf5e321ac3569cb84f09abb95440dc235c3ea1f34544d7b41c5c5e9155ff65c" Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.708923 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sx6q\" (UniqueName: \"kubernetes.io/projected/5a84d617-0a85-4195-b05f-46ee52c1e295-kube-api-access-4sx6q\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.708958 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a84d617-0a85-4195-b05f-46ee52c1e295-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.726119 4705 scope.go:117] "RemoveContainer" containerID="fcf5e321ac3569cb84f09abb95440dc235c3ea1f34544d7b41c5c5e9155ff65c" Jan 24 07:47:45 crc kubenswrapper[4705]: E0124 07:47:45.726591 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcf5e321ac3569cb84f09abb95440dc235c3ea1f34544d7b41c5c5e9155ff65c\": container with ID starting with fcf5e321ac3569cb84f09abb95440dc235c3ea1f34544d7b41c5c5e9155ff65c not found: ID does not exist" containerID="fcf5e321ac3569cb84f09abb95440dc235c3ea1f34544d7b41c5c5e9155ff65c" Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.726631 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcf5e321ac3569cb84f09abb95440dc235c3ea1f34544d7b41c5c5e9155ff65c"} err="failed to get container status \"fcf5e321ac3569cb84f09abb95440dc235c3ea1f34544d7b41c5c5e9155ff65c\": rpc error: code = NotFound desc = could not find container \"fcf5e321ac3569cb84f09abb95440dc235c3ea1f34544d7b41c5c5e9155ff65c\": container with ID starting with 
fcf5e321ac3569cb84f09abb95440dc235c3ea1f34544d7b41c5c5e9155ff65c not found: ID does not exist" Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.736932 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4"] Jan 24 07:47:45 crc kubenswrapper[4705]: I0124 07:47:45.740464 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb5d498c-lvnl4"] Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.245092 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g"] Jan 24 07:47:46 crc kubenswrapper[4705]: E0124 07:47:46.245346 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a84d617-0a85-4195-b05f-46ee52c1e295" containerName="route-controller-manager" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.245363 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a84d617-0a85-4195-b05f-46ee52c1e295" containerName="route-controller-manager" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.245517 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a84d617-0a85-4195-b05f-46ee52c1e295" containerName="route-controller-manager" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.246003 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.248229 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.252241 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.254033 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.254090 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.254123 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.254187 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.258928 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g"] Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.315192 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3dd23acb-5d7b-4901-bd72-42b16e405fa6-client-ca\") pod \"route-controller-manager-649446c59-4pb7g\" (UID: \"3dd23acb-5d7b-4901-bd72-42b16e405fa6\") " pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.315238 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dd23acb-5d7b-4901-bd72-42b16e405fa6-config\") pod \"route-controller-manager-649446c59-4pb7g\" (UID: \"3dd23acb-5d7b-4901-bd72-42b16e405fa6\") " pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.315290 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2xf2\" (UniqueName: \"kubernetes.io/projected/3dd23acb-5d7b-4901-bd72-42b16e405fa6-kube-api-access-m2xf2\") pod \"route-controller-manager-649446c59-4pb7g\" (UID: \"3dd23acb-5d7b-4901-bd72-42b16e405fa6\") " pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.315408 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dd23acb-5d7b-4901-bd72-42b16e405fa6-serving-cert\") pod \"route-controller-manager-649446c59-4pb7g\" (UID: \"3dd23acb-5d7b-4901-bd72-42b16e405fa6\") " pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.416309 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3dd23acb-5d7b-4901-bd72-42b16e405fa6-client-ca\") pod \"route-controller-manager-649446c59-4pb7g\" (UID: \"3dd23acb-5d7b-4901-bd72-42b16e405fa6\") " pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.416592 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dd23acb-5d7b-4901-bd72-42b16e405fa6-config\") pod \"route-controller-manager-649446c59-4pb7g\" (UID: 
\"3dd23acb-5d7b-4901-bd72-42b16e405fa6\") " pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.416680 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2xf2\" (UniqueName: \"kubernetes.io/projected/3dd23acb-5d7b-4901-bd72-42b16e405fa6-kube-api-access-m2xf2\") pod \"route-controller-manager-649446c59-4pb7g\" (UID: \"3dd23acb-5d7b-4901-bd72-42b16e405fa6\") " pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.416847 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dd23acb-5d7b-4901-bd72-42b16e405fa6-serving-cert\") pod \"route-controller-manager-649446c59-4pb7g\" (UID: \"3dd23acb-5d7b-4901-bd72-42b16e405fa6\") " pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.417793 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dd23acb-5d7b-4901-bd72-42b16e405fa6-config\") pod \"route-controller-manager-649446c59-4pb7g\" (UID: \"3dd23acb-5d7b-4901-bd72-42b16e405fa6\") " pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.418136 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3dd23acb-5d7b-4901-bd72-42b16e405fa6-client-ca\") pod \"route-controller-manager-649446c59-4pb7g\" (UID: \"3dd23acb-5d7b-4901-bd72-42b16e405fa6\") " pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.421468 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/3dd23acb-5d7b-4901-bd72-42b16e405fa6-serving-cert\") pod \"route-controller-manager-649446c59-4pb7g\" (UID: \"3dd23acb-5d7b-4901-bd72-42b16e405fa6\") " pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.442016 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2xf2\" (UniqueName: \"kubernetes.io/projected/3dd23acb-5d7b-4901-bd72-42b16e405fa6-kube-api-access-m2xf2\") pod \"route-controller-manager-649446c59-4pb7g\" (UID: \"3dd23acb-5d7b-4901-bd72-42b16e405fa6\") " pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.573323 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" Jan 24 07:47:46 crc kubenswrapper[4705]: I0124 07:47:46.958535 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g"] Jan 24 07:47:46 crc kubenswrapper[4705]: W0124 07:47:46.968037 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dd23acb_5d7b_4901_bd72_42b16e405fa6.slice/crio-ff3dac5046e9b5077e5eee05976d278613e6399818f5d09a95fa72ffd38e7555 WatchSource:0}: Error finding container ff3dac5046e9b5077e5eee05976d278613e6399818f5d09a95fa72ffd38e7555: Status 404 returned error can't find the container with id ff3dac5046e9b5077e5eee05976d278613e6399818f5d09a95fa72ffd38e7555 Jan 24 07:47:47 crc kubenswrapper[4705]: I0124 07:47:47.586969 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a84d617-0a85-4195-b05f-46ee52c1e295" path="/var/lib/kubelet/pods/5a84d617-0a85-4195-b05f-46ee52c1e295/volumes" Jan 24 07:47:47 crc kubenswrapper[4705]: I0124 07:47:47.722914 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" event={"ID":"3dd23acb-5d7b-4901-bd72-42b16e405fa6","Type":"ContainerStarted","Data":"9677816e849f4429d6292c75d1b79cf2b6a3fb287d2dbd0372f1579cff3b5dc1"} Jan 24 07:47:47 crc kubenswrapper[4705]: I0124 07:47:47.723201 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" event={"ID":"3dd23acb-5d7b-4901-bd72-42b16e405fa6","Type":"ContainerStarted","Data":"ff3dac5046e9b5077e5eee05976d278613e6399818f5d09a95fa72ffd38e7555"} Jan 24 07:47:47 crc kubenswrapper[4705]: I0124 07:47:47.723487 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" Jan 24 07:47:47 crc kubenswrapper[4705]: I0124 07:47:47.753271 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" podStartSLOduration=2.753225185 podStartE2EDuration="2.753225185s" podCreationTimestamp="2026-01-24 07:47:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:47:47.746669422 +0000 UTC m=+406.466542720" watchObservedRunningTime="2026-01-24 07:47:47.753225185 +0000 UTC m=+406.473098473" Jan 24 07:47:47 crc kubenswrapper[4705]: I0124 07:47:47.886430 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-649446c59-4pb7g" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.338986 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" podUID="e4e30be1-989b-4a5d-a33c-79c00184ce75" containerName="registry" 
containerID="cri-o://3d30c7ca445101f777fd4cd06988c3b07a2b0773a6b418e514da9c0a3bcb374d" gracePeriod=30 Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.690619 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.779231 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-bound-sa-token\") pod \"e4e30be1-989b-4a5d-a33c-79c00184ce75\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.779326 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgpnn\" (UniqueName: \"kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-kube-api-access-zgpnn\") pod \"e4e30be1-989b-4a5d-a33c-79c00184ce75\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.779365 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e4e30be1-989b-4a5d-a33c-79c00184ce75-registry-certificates\") pod \"e4e30be1-989b-4a5d-a33c-79c00184ce75\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.779600 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"e4e30be1-989b-4a5d-a33c-79c00184ce75\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.779666 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/e4e30be1-989b-4a5d-a33c-79c00184ce75-installation-pull-secrets\") pod \"e4e30be1-989b-4a5d-a33c-79c00184ce75\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.779711 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e4e30be1-989b-4a5d-a33c-79c00184ce75-ca-trust-extracted\") pod \"e4e30be1-989b-4a5d-a33c-79c00184ce75\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.779743 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-registry-tls\") pod \"e4e30be1-989b-4a5d-a33c-79c00184ce75\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.779770 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4e30be1-989b-4a5d-a33c-79c00184ce75-trusted-ca\") pod \"e4e30be1-989b-4a5d-a33c-79c00184ce75\" (UID: \"e4e30be1-989b-4a5d-a33c-79c00184ce75\") " Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.780933 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4e30be1-989b-4a5d-a33c-79c00184ce75-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e4e30be1-989b-4a5d-a33c-79c00184ce75" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.781020 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4e30be1-989b-4a5d-a33c-79c00184ce75-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e4e30be1-989b-4a5d-a33c-79c00184ce75" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.781783 4705 generic.go:334] "Generic (PLEG): container finished" podID="e4e30be1-989b-4a5d-a33c-79c00184ce75" containerID="3d30c7ca445101f777fd4cd06988c3b07a2b0773a6b418e514da9c0a3bcb374d" exitCode=0 Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.781835 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" event={"ID":"e4e30be1-989b-4a5d-a33c-79c00184ce75","Type":"ContainerDied","Data":"3d30c7ca445101f777fd4cd06988c3b07a2b0773a6b418e514da9c0a3bcb374d"} Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.781892 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" event={"ID":"e4e30be1-989b-4a5d-a33c-79c00184ce75","Type":"ContainerDied","Data":"b2c16382b1d0cb3034d5a2d9254ca76a11123cfed2b717f529fc0487af5ab5ea"} Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.781921 4705 scope.go:117] "RemoveContainer" containerID="3d30c7ca445101f777fd4cd06988c3b07a2b0773a6b418e514da9c0a3bcb374d" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.781952 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-m7gzc" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.786416 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e4e30be1-989b-4a5d-a33c-79c00184ce75" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.787590 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e30be1-989b-4a5d-a33c-79c00184ce75-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e4e30be1-989b-4a5d-a33c-79c00184ce75" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.788460 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-kube-api-access-zgpnn" (OuterVolumeSpecName: "kube-api-access-zgpnn") pod "e4e30be1-989b-4a5d-a33c-79c00184ce75" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75"). InnerVolumeSpecName "kube-api-access-zgpnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.789105 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e4e30be1-989b-4a5d-a33c-79c00184ce75" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.797191 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "e4e30be1-989b-4a5d-a33c-79c00184ce75" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.801028 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4e30be1-989b-4a5d-a33c-79c00184ce75-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e4e30be1-989b-4a5d-a33c-79c00184ce75" (UID: "e4e30be1-989b-4a5d-a33c-79c00184ce75"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.832645 4705 scope.go:117] "RemoveContainer" containerID="3d30c7ca445101f777fd4cd06988c3b07a2b0773a6b418e514da9c0a3bcb374d" Jan 24 07:47:58 crc kubenswrapper[4705]: E0124 07:47:58.833305 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d30c7ca445101f777fd4cd06988c3b07a2b0773a6b418e514da9c0a3bcb374d\": container with ID starting with 3d30c7ca445101f777fd4cd06988c3b07a2b0773a6b418e514da9c0a3bcb374d not found: ID does not exist" containerID="3d30c7ca445101f777fd4cd06988c3b07a2b0773a6b418e514da9c0a3bcb374d" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.833358 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d30c7ca445101f777fd4cd06988c3b07a2b0773a6b418e514da9c0a3bcb374d"} err="failed to get container status \"3d30c7ca445101f777fd4cd06988c3b07a2b0773a6b418e514da9c0a3bcb374d\": rpc error: code = NotFound 
desc = could not find container \"3d30c7ca445101f777fd4cd06988c3b07a2b0773a6b418e514da9c0a3bcb374d\": container with ID starting with 3d30c7ca445101f777fd4cd06988c3b07a2b0773a6b418e514da9c0a3bcb374d not found: ID does not exist" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.881913 4705 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e4e30be1-989b-4a5d-a33c-79c00184ce75-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.881953 4705 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e4e30be1-989b-4a5d-a33c-79c00184ce75-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.881963 4705 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e4e30be1-989b-4a5d-a33c-79c00184ce75-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.881973 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4e30be1-989b-4a5d-a33c-79c00184ce75-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.881982 4705 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.881991 4705 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:58 crc kubenswrapper[4705]: I0124 07:47:58.881998 4705 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-zgpnn\" (UniqueName: \"kubernetes.io/projected/e4e30be1-989b-4a5d-a33c-79c00184ce75-kube-api-access-zgpnn\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:59 crc kubenswrapper[4705]: I0124 07:47:59.114096 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-m7gzc"] Jan 24 07:47:59 crc kubenswrapper[4705]: I0124 07:47:59.118975 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-m7gzc"] Jan 24 07:47:59 crc kubenswrapper[4705]: I0124 07:47:59.582729 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4e30be1-989b-4a5d-a33c-79c00184ce75" path="/var/lib/kubelet/pods/e4e30be1-989b-4a5d-a33c-79c00184ce75/volumes" Jan 24 07:48:07 crc kubenswrapper[4705]: I0124 07:48:07.071217 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:48:07 crc kubenswrapper[4705]: I0124 07:48:07.072946 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:48:07 crc kubenswrapper[4705]: I0124 07:48:07.073238 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 07:48:07 crc kubenswrapper[4705]: I0124 07:48:07.073856 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"55a789d696c5fbcd2185ba344cba09dd1eceead04a4521477c8a234b346679c0"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:48:07 crc kubenswrapper[4705]: I0124 07:48:07.073918 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://55a789d696c5fbcd2185ba344cba09dd1eceead04a4521477c8a234b346679c0" gracePeriod=600 Jan 24 07:48:08 crc kubenswrapper[4705]: I0124 07:48:08.008762 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerID="55a789d696c5fbcd2185ba344cba09dd1eceead04a4521477c8a234b346679c0" exitCode=0 Jan 24 07:48:08 crc kubenswrapper[4705]: I0124 07:48:08.008860 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"55a789d696c5fbcd2185ba344cba09dd1eceead04a4521477c8a234b346679c0"} Jan 24 07:48:08 crc kubenswrapper[4705]: I0124 07:48:08.009150 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"51ea1cc84746f000ef5c3cc6afc9d129b7d13ec7278b9c4c6bb3c9526467fa3a"} Jan 24 07:48:08 crc kubenswrapper[4705]: I0124 07:48:08.009184 4705 scope.go:117] "RemoveContainer" containerID="431cb8075f18f140b73d9e089cff43755e810217f0785819ba908696396884a8" Jan 24 07:50:07 crc kubenswrapper[4705]: I0124 07:50:07.071931 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:50:07 crc kubenswrapper[4705]: I0124 07:50:07.072419 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:50:37 crc kubenswrapper[4705]: I0124 07:50:37.071896 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:50:37 crc kubenswrapper[4705]: I0124 07:50:37.072425 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:51:07 crc kubenswrapper[4705]: I0124 07:51:07.071531 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:51:07 crc kubenswrapper[4705]: I0124 07:51:07.072702 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 24 07:51:07 crc kubenswrapper[4705]: I0124 07:51:07.072806 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 07:51:07 crc kubenswrapper[4705]: I0124 07:51:07.073263 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51ea1cc84746f000ef5c3cc6afc9d129b7d13ec7278b9c4c6bb3c9526467fa3a"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:51:07 crc kubenswrapper[4705]: I0124 07:51:07.073394 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://51ea1cc84746f000ef5c3cc6afc9d129b7d13ec7278b9c4c6bb3c9526467fa3a" gracePeriod=600 Jan 24 07:51:08 crc kubenswrapper[4705]: I0124 07:51:08.055094 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerID="51ea1cc84746f000ef5c3cc6afc9d129b7d13ec7278b9c4c6bb3c9526467fa3a" exitCode=0 Jan 24 07:51:08 crc kubenswrapper[4705]: I0124 07:51:08.055165 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"51ea1cc84746f000ef5c3cc6afc9d129b7d13ec7278b9c4c6bb3c9526467fa3a"} Jan 24 07:51:08 crc kubenswrapper[4705]: I0124 07:51:08.055682 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"49dd615a3faac21183e353fd2b4521164856375addc3eb5e62c6cd13a36cfc96"} Jan 24 07:51:08 
crc kubenswrapper[4705]: I0124 07:51:08.055708 4705 scope.go:117] "RemoveContainer" containerID="55a789d696c5fbcd2185ba344cba09dd1eceead04a4521477c8a234b346679c0" Jan 24 07:53:07 crc kubenswrapper[4705]: I0124 07:53:07.071846 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:53:07 crc kubenswrapper[4705]: I0124 07:53:07.072987 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:53:37 crc kubenswrapper[4705]: I0124 07:53:37.071491 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:53:37 crc kubenswrapper[4705]: I0124 07:53:37.072046 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.722023 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xm7cv"] Jan 24 07:53:43 crc kubenswrapper[4705]: E0124 07:53:43.722624 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e4e30be1-989b-4a5d-a33c-79c00184ce75" containerName="registry" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.722643 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e30be1-989b-4a5d-a33c-79c00184ce75" containerName="registry" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.722860 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4e30be1-989b-4a5d-a33c-79c00184ce75" containerName="registry" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.723317 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xm7cv" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.726434 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.734262 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.734301 4705 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wr5wq" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.740093 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-7d248"] Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.741049 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7d248" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.744149 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xm7cv"] Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.745333 4705 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-q7nbh" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.750675 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dz4zc"] Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.755468 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dz4zc" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.762905 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7d248"] Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.769578 4705 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-lhrlf" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.779971 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dz4zc"] Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.803677 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfkhs\" (UniqueName: \"kubernetes.io/projected/0d0523a0-74f2-455b-be13-f2c764d4b4e3-kube-api-access-wfkhs\") pod \"cert-manager-webhook-687f57d79b-dz4zc\" (UID: \"0d0523a0-74f2-455b-be13-f2c764d4b4e3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dz4zc" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.803743 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgvfw\" (UniqueName: 
\"kubernetes.io/projected/b9303a69-3000-46da-a5eb-4c08989db796-kube-api-access-dgvfw\") pod \"cert-manager-858654f9db-7d248\" (UID: \"b9303a69-3000-46da-a5eb-4c08989db796\") " pod="cert-manager/cert-manager-858654f9db-7d248" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.803763 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t84k\" (UniqueName: \"kubernetes.io/projected/5fd12a13-2f0a-45ca-99d8-87e45f8f5743-kube-api-access-5t84k\") pod \"cert-manager-cainjector-cf98fcc89-xm7cv\" (UID: \"5fd12a13-2f0a-45ca-99d8-87e45f8f5743\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xm7cv" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.904742 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfkhs\" (UniqueName: \"kubernetes.io/projected/0d0523a0-74f2-455b-be13-f2c764d4b4e3-kube-api-access-wfkhs\") pod \"cert-manager-webhook-687f57d79b-dz4zc\" (UID: \"0d0523a0-74f2-455b-be13-f2c764d4b4e3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dz4zc" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.904814 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgvfw\" (UniqueName: \"kubernetes.io/projected/b9303a69-3000-46da-a5eb-4c08989db796-kube-api-access-dgvfw\") pod \"cert-manager-858654f9db-7d248\" (UID: \"b9303a69-3000-46da-a5eb-4c08989db796\") " pod="cert-manager/cert-manager-858654f9db-7d248" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.904866 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t84k\" (UniqueName: \"kubernetes.io/projected/5fd12a13-2f0a-45ca-99d8-87e45f8f5743-kube-api-access-5t84k\") pod \"cert-manager-cainjector-cf98fcc89-xm7cv\" (UID: \"5fd12a13-2f0a-45ca-99d8-87e45f8f5743\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xm7cv" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.926469 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfkhs\" (UniqueName: \"kubernetes.io/projected/0d0523a0-74f2-455b-be13-f2c764d4b4e3-kube-api-access-wfkhs\") pod \"cert-manager-webhook-687f57d79b-dz4zc\" (UID: \"0d0523a0-74f2-455b-be13-f2c764d4b4e3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dz4zc" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.927584 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t84k\" (UniqueName: \"kubernetes.io/projected/5fd12a13-2f0a-45ca-99d8-87e45f8f5743-kube-api-access-5t84k\") pod \"cert-manager-cainjector-cf98fcc89-xm7cv\" (UID: \"5fd12a13-2f0a-45ca-99d8-87e45f8f5743\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xm7cv" Jan 24 07:53:43 crc kubenswrapper[4705]: I0124 07:53:43.927802 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgvfw\" (UniqueName: \"kubernetes.io/projected/b9303a69-3000-46da-a5eb-4c08989db796-kube-api-access-dgvfw\") pod \"cert-manager-858654f9db-7d248\" (UID: \"b9303a69-3000-46da-a5eb-4c08989db796\") " pod="cert-manager/cert-manager-858654f9db-7d248" Jan 24 07:53:44 crc kubenswrapper[4705]: I0124 07:53:44.044303 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xm7cv" Jan 24 07:53:44 crc kubenswrapper[4705]: I0124 07:53:44.085332 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7d248" Jan 24 07:53:44 crc kubenswrapper[4705]: I0124 07:53:44.092856 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dz4zc" Jan 24 07:53:44 crc kubenswrapper[4705]: I0124 07:53:44.296203 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xm7cv"] Jan 24 07:53:44 crc kubenswrapper[4705]: I0124 07:53:44.305044 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:53:44 crc kubenswrapper[4705]: I0124 07:53:44.392321 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dz4zc"] Jan 24 07:53:44 crc kubenswrapper[4705]: I0124 07:53:44.535809 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7d248"] Jan 24 07:53:44 crc kubenswrapper[4705]: W0124 07:53:44.540506 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9303a69_3000_46da_a5eb_4c08989db796.slice/crio-fc4b16ce5cf698eb09cbad12a0ca66454d892bdcc4da9689f507be1c5b5dae8b WatchSource:0}: Error finding container fc4b16ce5cf698eb09cbad12a0ca66454d892bdcc4da9689f507be1c5b5dae8b: Status 404 returned error can't find the container with id fc4b16ce5cf698eb09cbad12a0ca66454d892bdcc4da9689f507be1c5b5dae8b Jan 24 07:53:44 crc kubenswrapper[4705]: I0124 07:53:44.966132 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7d248" event={"ID":"b9303a69-3000-46da-a5eb-4c08989db796","Type":"ContainerStarted","Data":"fc4b16ce5cf698eb09cbad12a0ca66454d892bdcc4da9689f507be1c5b5dae8b"} Jan 24 07:53:44 crc kubenswrapper[4705]: I0124 07:53:44.967665 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xm7cv" event={"ID":"5fd12a13-2f0a-45ca-99d8-87e45f8f5743","Type":"ContainerStarted","Data":"b2bf66e38f8c010e66f20aaef3df8defcdbe87c25d5c1e893e700869e27109a6"} Jan 24 07:53:44 crc kubenswrapper[4705]: 
I0124 07:53:44.968601 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dz4zc" event={"ID":"0d0523a0-74f2-455b-be13-f2c764d4b4e3","Type":"ContainerStarted","Data":"fda63560313f62a602edd70a9b8626246d08ad09e3e55b93fd34d7c3b2b67ad8"} Jan 24 07:53:46 crc kubenswrapper[4705]: I0124 07:53:46.983261 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xm7cv" event={"ID":"5fd12a13-2f0a-45ca-99d8-87e45f8f5743","Type":"ContainerStarted","Data":"3b4b21025e035175631c02d28303d2fcd67de8808d8257e6f8ac3fec86217fb0"} Jan 24 07:53:47 crc kubenswrapper[4705]: I0124 07:53:47.008227 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xm7cv" podStartSLOduration=1.816234602 podStartE2EDuration="4.008199423s" podCreationTimestamp="2026-01-24 07:53:43 +0000 UTC" firstStartedPulling="2026-01-24 07:53:44.30480428 +0000 UTC m=+763.024677578" lastFinishedPulling="2026-01-24 07:53:46.496769111 +0000 UTC m=+765.216642399" observedRunningTime="2026-01-24 07:53:46.998869148 +0000 UTC m=+765.718742476" watchObservedRunningTime="2026-01-24 07:53:47.008199423 +0000 UTC m=+765.728072711" Jan 24 07:53:49 crc kubenswrapper[4705]: I0124 07:53:49.008755 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dz4zc" event={"ID":"0d0523a0-74f2-455b-be13-f2c764d4b4e3","Type":"ContainerStarted","Data":"d76c07e55d277f3cbb9833ec6c2004911b547a258055ed412ba22c0431eeb90f"} Jan 24 07:53:49 crc kubenswrapper[4705]: I0124 07:53:49.009366 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-dz4zc" Jan 24 07:53:49 crc kubenswrapper[4705]: I0124 07:53:49.011645 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7d248" 
event={"ID":"b9303a69-3000-46da-a5eb-4c08989db796","Type":"ContainerStarted","Data":"6b4f9e3b7c9b4e8b2e4c0f17e81a06abe5fd30dfc56b616f09d51691d584d262"} Jan 24 07:53:49 crc kubenswrapper[4705]: I0124 07:53:49.029168 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-dz4zc" podStartSLOduration=2.153840529 podStartE2EDuration="6.029150358s" podCreationTimestamp="2026-01-24 07:53:43 +0000 UTC" firstStartedPulling="2026-01-24 07:53:44.39370371 +0000 UTC m=+763.113576998" lastFinishedPulling="2026-01-24 07:53:48.269013539 +0000 UTC m=+766.988886827" observedRunningTime="2026-01-24 07:53:49.024813365 +0000 UTC m=+767.744686653" watchObservedRunningTime="2026-01-24 07:53:49.029150358 +0000 UTC m=+767.749023646" Jan 24 07:53:49 crc kubenswrapper[4705]: I0124 07:53:49.041520 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-7d248" podStartSLOduration=2.268113336 podStartE2EDuration="6.041503818s" podCreationTimestamp="2026-01-24 07:53:43 +0000 UTC" firstStartedPulling="2026-01-24 07:53:44.542688091 +0000 UTC m=+763.262561379" lastFinishedPulling="2026-01-24 07:53:48.316078573 +0000 UTC m=+767.035951861" observedRunningTime="2026-01-24 07:53:49.039985485 +0000 UTC m=+767.759858763" watchObservedRunningTime="2026-01-24 07:53:49.041503818 +0000 UTC m=+767.761377106" Jan 24 07:53:52 crc kubenswrapper[4705]: I0124 07:53:52.523010 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-js42b"] Jan 24 07:53:52 crc kubenswrapper[4705]: I0124 07:53:52.523777 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovn-controller" containerID="cri-o://eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5" gracePeriod=30 Jan 24 07:53:52 crc kubenswrapper[4705]: I0124 
07:53:52.523867 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2" gracePeriod=30 Jan 24 07:53:52 crc kubenswrapper[4705]: I0124 07:53:52.523864 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="nbdb" containerID="cri-o://16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b" gracePeriod=30 Jan 24 07:53:52 crc kubenswrapper[4705]: I0124 07:53:52.523981 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="kube-rbac-proxy-node" containerID="cri-o://48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b" gracePeriod=30 Jan 24 07:53:52 crc kubenswrapper[4705]: I0124 07:53:52.524019 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="northd" containerID="cri-o://4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426" gracePeriod=30 Jan 24 07:53:52 crc kubenswrapper[4705]: I0124 07:53:52.524043 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovn-acl-logging" containerID="cri-o://a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf" gracePeriod=30 Jan 24 07:53:52 crc kubenswrapper[4705]: I0124 07:53:52.524202 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" 
podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="sbdb" containerID="cri-o://24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2" gracePeriod=30 Jan 24 07:53:52 crc kubenswrapper[4705]: I0124 07:53:52.552306 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovnkube-controller" containerID="cri-o://a301faf371be2148d30d37d8a5614e3aac18be4bddb9bc53afcbfe021b2c1372" gracePeriod=30 Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.031812 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9wbv_5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd/kube-multus/2.log" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.032721 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9wbv_5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd/kube-multus/1.log" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.032759 4705 generic.go:334] "Generic (PLEG): container finished" podID="5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd" containerID="05df28d77c528db547c77fa6f2e2e34112f9d66c1b57c7bc41c7319cbd191449" exitCode=2 Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.032808 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9wbv" event={"ID":"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd","Type":"ContainerDied","Data":"05df28d77c528db547c77fa6f2e2e34112f9d66c1b57c7bc41c7319cbd191449"} Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.032870 4705 scope.go:117] "RemoveContainer" containerID="5a5e5acaef986257b20ca3ea8d8222ce26d04290d2ba0562be112581ae9f12c8" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.033332 4705 scope.go:117] "RemoveContainer" containerID="05df28d77c528db547c77fa6f2e2e34112f9d66c1b57c7bc41c7319cbd191449" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.042810 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovnkube-controller/3.log" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.058650 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovn-acl-logging/0.log" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.063829 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovn-controller/0.log" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.064299 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerID="a301faf371be2148d30d37d8a5614e3aac18be4bddb9bc53afcbfe021b2c1372" exitCode=0 Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.064328 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerID="24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2" exitCode=0 Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.064341 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerID="16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b" exitCode=0 Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.064352 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerID="4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426" exitCode=0 Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.064366 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerID="d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2" exitCode=0 Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.064375 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerID="48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b" exitCode=0
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.064383 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerID="a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf" exitCode=143
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.064393 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerID="eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5" exitCode=143
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.064389 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerDied","Data":"a301faf371be2148d30d37d8a5614e3aac18be4bddb9bc53afcbfe021b2c1372"}
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.064425 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerDied","Data":"24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2"}
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.064445 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerDied","Data":"16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b"}
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.064458 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerDied","Data":"4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426"}
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.064469 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerDied","Data":"d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2"}
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.064480 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerDied","Data":"48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b"}
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.064488 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerDied","Data":"a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf"}
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.064497 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerDied","Data":"eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5"}
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.089344 4705 scope.go:117] "RemoveContainer" containerID="089fb2079fb32f3d46cf6a91bfc27a6fc07d67169f8b0dec889ef15a0433f124"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.292807 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovn-acl-logging/0.log"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.293208 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovn-controller/0.log"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.293542 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-js42b"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.356726 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d59sv"]
Jan 24 07:53:53 crc kubenswrapper[4705]: E0124 07:53:53.356997 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovnkube-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357015 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovnkube-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: E0124 07:53:53.357025 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="nbdb"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357032 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="nbdb"
Jan 24 07:53:53 crc kubenswrapper[4705]: E0124 07:53:53.357044 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovn-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357051 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovn-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: E0124 07:53:53.357064 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovnkube-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357071 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovnkube-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: E0124 07:53:53.357080 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="kubecfg-setup"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357087 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="kubecfg-setup"
Jan 24 07:53:53 crc kubenswrapper[4705]: E0124 07:53:53.357098 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="kube-rbac-proxy-node"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357105 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="kube-rbac-proxy-node"
Jan 24 07:53:53 crc kubenswrapper[4705]: E0124 07:53:53.357115 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovnkube-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357122 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovnkube-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: E0124 07:53:53.357135 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="sbdb"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357142 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="sbdb"
Jan 24 07:53:53 crc kubenswrapper[4705]: E0124 07:53:53.357153 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovn-acl-logging"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357160 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovn-acl-logging"
Jan 24 07:53:53 crc kubenswrapper[4705]: E0124 07:53:53.357168 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="kube-rbac-proxy-ovn-metrics"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357174 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="kube-rbac-proxy-ovn-metrics"
Jan 24 07:53:53 crc kubenswrapper[4705]: E0124 07:53:53.357184 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="northd"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357193 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="northd"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357304 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovnkube-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357316 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="northd"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357325 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="kube-rbac-proxy-ovn-metrics"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357333 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="kube-rbac-proxy-node"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357341 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovn-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357348 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovn-acl-logging"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357355 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="sbdb"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357363 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovnkube-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357373 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="nbdb"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357379 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovnkube-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357385 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovnkube-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: E0124 07:53:53.357498 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovnkube-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357505 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovnkube-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: E0124 07:53:53.357513 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovnkube-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357519 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovnkube-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.357611 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" containerName="ovnkube-controller"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.359220 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.430474 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.430364 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-etc-openvswitch\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.430569 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-kubelet\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.430648 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.430741 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovnkube-script-lib\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.430794 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-ovn\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.430812 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-openvswitch\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.430868 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovnkube-config\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.430896 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-slash\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.430918 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-run-ovn-kubernetes\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.430947 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-systemd-units\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.430937 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.430968 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-env-overrides\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.430985 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-cni-netd\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.430994 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.430999 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-cni-bin\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431004 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-slash" (OuterVolumeSpecName: "host-slash") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431018 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431039 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431041 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431056 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431077 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431117 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-log-socket\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431158 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-var-lib-openvswitch\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431205 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovn-node-metrics-cert\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431240 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-run-netns\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431256 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-node-log\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431265 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-log-socket" (OuterVolumeSpecName: "log-socket") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431276 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcjx8\" (UniqueName: \"kubernetes.io/projected/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-kube-api-access-bcjx8\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431293 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-systemd\") pod \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\" (UID: \"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4\") "
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431304 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431310 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431336 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431339 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-node-log" (OuterVolumeSpecName: "node-log") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431348 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431357 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431475 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-etc-openvswitch\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431542 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6257ac48-8583-40ba-8a64-1dd0375653c6-env-overrides\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431636 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p78j6\" (UniqueName: \"kubernetes.io/projected/6257ac48-8583-40ba-8a64-1dd0375653c6-kube-api-access-p78j6\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431686 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-kubelet\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431713 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-run-systemd\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431737 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-node-log\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431774 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-run-ovn\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431834 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6257ac48-8583-40ba-8a64-1dd0375653c6-ovnkube-config\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431879 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-var-lib-openvswitch\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431901 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-log-socket\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431918 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-run-ovn-kubernetes\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431962 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.431995 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6257ac48-8583-40ba-8a64-1dd0375653c6-ovn-node-metrics-cert\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432025 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-systemd-units\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432046 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6257ac48-8583-40ba-8a64-1dd0375653c6-ovnkube-script-lib\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432063 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-slash\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432084 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-run-netns\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432117 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-run-openvswitch\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432143 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432205 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-cni-netd\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432249 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-cni-bin\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv"
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432326 4705 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432338 4705 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-slash\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432349 4705 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432359 4705 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432368 4705 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432376 4705 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432384 4705 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432394 4705 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432403 4705 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-log-socket\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432412 4705 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432421 4705 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432429 4705 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-node-log\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432437 4705 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432444 4705 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432455 4705 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432466 4705 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.432474 4705 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.436541 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "ovn-node-metrics-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.436611 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-kube-api-access-bcjx8" (OuterVolumeSpecName: "kube-api-access-bcjx8") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "kube-api-access-bcjx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.444438 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" (UID: "3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.533616 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-cni-netd\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.533667 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-cni-bin\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.533698 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-etc-openvswitch\") pod \"ovnkube-node-d59sv\" 
(UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.533718 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6257ac48-8583-40ba-8a64-1dd0375653c6-env-overrides\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.533736 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-cni-netd\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.533785 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-cni-bin\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.533858 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-etc-openvswitch\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.533742 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p78j6\" (UniqueName: \"kubernetes.io/projected/6257ac48-8583-40ba-8a64-1dd0375653c6-kube-api-access-p78j6\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.533920 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-kubelet\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.533945 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-run-systemd\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.533962 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-node-log\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.533982 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-run-ovn\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534010 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6257ac48-8583-40ba-8a64-1dd0375653c6-ovnkube-config\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534028 
4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-var-lib-openvswitch\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534057 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-var-lib-openvswitch\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534086 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-run-ovn\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534102 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-run-systemd\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534158 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-kubelet\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534133 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-node-log\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534188 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-log-socket\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534213 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-run-ovn-kubernetes\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534220 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-log-socket\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534240 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534261 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-run-ovn-kubernetes\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534287 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6257ac48-8583-40ba-8a64-1dd0375653c6-ovn-node-metrics-cert\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534387 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-systemd-units\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534442 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6257ac48-8583-40ba-8a64-1dd0375653c6-ovnkube-script-lib\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534477 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-slash\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534516 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-run-netns\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534536 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-run-openvswitch\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534547 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6257ac48-8583-40ba-8a64-1dd0375653c6-env-overrides\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534610 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6257ac48-8583-40ba-8a64-1dd0375653c6-ovnkube-config\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534630 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-run-openvswitch\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534636 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-slash\") pod \"ovnkube-node-d59sv\" (UID: 
\"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534649 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-run-netns\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534633 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534666 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6257ac48-8583-40ba-8a64-1dd0375653c6-systemd-units\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534693 4705 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534705 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcjx8\" (UniqueName: \"kubernetes.io/projected/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-kube-api-access-bcjx8\") on node \"crc\" DevicePath \"\"" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.534715 4705 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.535077 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6257ac48-8583-40ba-8a64-1dd0375653c6-ovnkube-script-lib\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.537164 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6257ac48-8583-40ba-8a64-1dd0375653c6-ovn-node-metrics-cert\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.553929 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p78j6\" (UniqueName: \"kubernetes.io/projected/6257ac48-8583-40ba-8a64-1dd0375653c6-kube-api-access-p78j6\") pod \"ovnkube-node-d59sv\" (UID: \"6257ac48-8583-40ba-8a64-1dd0375653c6\") " pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:53 crc kubenswrapper[4705]: I0124 07:53:53.690632 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.071330 4705 generic.go:334] "Generic (PLEG): container finished" podID="6257ac48-8583-40ba-8a64-1dd0375653c6" containerID="4db09f1a8e39e4dad1ee4b0c87117ad3b13c37e590be2c9837e387d2e14c6dc3" exitCode=0 Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.071412 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" event={"ID":"6257ac48-8583-40ba-8a64-1dd0375653c6","Type":"ContainerDied","Data":"4db09f1a8e39e4dad1ee4b0c87117ad3b13c37e590be2c9837e387d2e14c6dc3"} Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.071452 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" event={"ID":"6257ac48-8583-40ba-8a64-1dd0375653c6","Type":"ContainerStarted","Data":"bd7646920c2b3444699d57846d58eb16a7c9ff45ed2a8eab4cd47bea97209c27"} Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.079478 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovn-acl-logging/0.log" Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.080045 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-js42b_3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/ovn-controller/0.log" Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.080455 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" event={"ID":"3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4","Type":"ContainerDied","Data":"5c6f6a8f48c413f8fdefb764ad0bcad429c0095b81cd62a9ca17f7ff1ddb8416"} Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.080513 4705 scope.go:117] "RemoveContainer" containerID="a301faf371be2148d30d37d8a5614e3aac18be4bddb9bc53afcbfe021b2c1372" Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.080566 4705 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-js42b" Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.083072 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9wbv_5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd/kube-multus/2.log" Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.083136 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9wbv" event={"ID":"5d7cddd8-1e3d-4cb1-a031-08b2dad2e3dd","Type":"ContainerStarted","Data":"93c5d89994391fae4f44c89107c3ce72c48cdccf6ab8cab8b744e2970d532cbd"} Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.096550 4705 scope.go:117] "RemoveContainer" containerID="24ed79ce5c99356cffedc3495cea6913872713ef87f95b6d8afd8888c16279b2" Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.098587 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-dz4zc" Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.132266 4705 scope.go:117] "RemoveContainer" containerID="16dcb70df634bc2e228dded1700c3c3a721b00d7f4c80bdc9da378eb2a56bc0b" Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.167984 4705 scope.go:117] "RemoveContainer" containerID="4fb273988187a21be8aa812e731447174f0817f74960e7c7458984e27a500426" Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.179731 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-js42b"] Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.188187 4705 scope.go:117] "RemoveContainer" containerID="d82dfc8159c0d2f5fb7495599f4afb6ff4ad9ea27dc743433cce88a3d38bd9d2" Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.188643 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-js42b"] Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.200372 4705 scope.go:117] "RemoveContainer" 
containerID="48e78ae9c2cd17702f90459328735554b818abbdc4c5d3db3e87b04e920b446b" Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.211799 4705 scope.go:117] "RemoveContainer" containerID="a0d32bab0dbd36e12c58a0563a3715e86b7aa8d7b2cb439a963737fbbb5e99bf" Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.223401 4705 scope.go:117] "RemoveContainer" containerID="eb92aadb3229525b480dbcc9f7dbc64a5666e9563d21deda0954a558123687d5" Jan 24 07:53:54 crc kubenswrapper[4705]: I0124 07:53:54.243500 4705 scope.go:117] "RemoveContainer" containerID="61c0096af2aa79e7dc77927658d6b7fc10cb121cf4e30475c8e261cc9cc7c2f9" Jan 24 07:53:55 crc kubenswrapper[4705]: I0124 07:53:55.093658 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" event={"ID":"6257ac48-8583-40ba-8a64-1dd0375653c6","Type":"ContainerStarted","Data":"574a8b0b50df9394653499ce067ce0d9bee6a52fdf197aa47dda194135ea5e8e"} Jan 24 07:53:55 crc kubenswrapper[4705]: I0124 07:53:55.093962 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" event={"ID":"6257ac48-8583-40ba-8a64-1dd0375653c6","Type":"ContainerStarted","Data":"bdaeafa1934e4b2de1f5acf8908308c66b3ba0a14b568a97353423237b8c3b76"} Jan 24 07:53:55 crc kubenswrapper[4705]: I0124 07:53:55.093977 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" event={"ID":"6257ac48-8583-40ba-8a64-1dd0375653c6","Type":"ContainerStarted","Data":"81b5c4acf43713e3f937d27fb603f071dff563f20e5ad3de84709829b8d8b4a3"} Jan 24 07:53:55 crc kubenswrapper[4705]: I0124 07:53:55.093988 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" event={"ID":"6257ac48-8583-40ba-8a64-1dd0375653c6","Type":"ContainerStarted","Data":"ff7a1db181d09d5ecc7bfb0999047c1b302edbb4c34cc1f8d4931977d2b0113d"} Jan 24 07:53:55 crc kubenswrapper[4705]: I0124 07:53:55.094000 4705 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" event={"ID":"6257ac48-8583-40ba-8a64-1dd0375653c6","Type":"ContainerStarted","Data":"aa471c6f409ca00597ec25047bbbd47140e3d972479742876a084ee0023fd4f5"} Jan 24 07:53:55 crc kubenswrapper[4705]: I0124 07:53:55.094009 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" event={"ID":"6257ac48-8583-40ba-8a64-1dd0375653c6","Type":"ContainerStarted","Data":"d89b74857a7257f5d7a1e3833896569b8d5d357b7564d71e7c2a0d38529c6c6c"} Jan 24 07:53:55 crc kubenswrapper[4705]: I0124 07:53:55.582298 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4" path="/var/lib/kubelet/pods/3c5ab7a3-a576-4878-9b73-c3a4ac6d69a4/volumes" Jan 24 07:53:58 crc kubenswrapper[4705]: I0124 07:53:58.118091 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" event={"ID":"6257ac48-8583-40ba-8a64-1dd0375653c6","Type":"ContainerStarted","Data":"73444de71b51af0dcdaad6b9bd324e08f84812658e166e86c1ee4cb096d31626"} Jan 24 07:54:01 crc kubenswrapper[4705]: I0124 07:54:01.140565 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" event={"ID":"6257ac48-8583-40ba-8a64-1dd0375653c6","Type":"ContainerStarted","Data":"e85d5cb1dce4d82342e8560e41f3f9f43c730bc6b6865e7e3c2f766bdbda5395"} Jan 24 07:54:01 crc kubenswrapper[4705]: I0124 07:54:01.140967 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:54:01 crc kubenswrapper[4705]: I0124 07:54:01.141513 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:54:01 crc kubenswrapper[4705]: I0124 07:54:01.173361 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:54:01 crc 
kubenswrapper[4705]: I0124 07:54:01.188474 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" podStartSLOduration=8.188454666 podStartE2EDuration="8.188454666s" podCreationTimestamp="2026-01-24 07:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:54:01.187212251 +0000 UTC m=+779.907085539" watchObservedRunningTime="2026-01-24 07:54:01.188454666 +0000 UTC m=+779.908327954" Jan 24 07:54:01 crc kubenswrapper[4705]: I0124 07:54:01.686853 4705 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 24 07:54:02 crc kubenswrapper[4705]: I0124 07:54:02.144731 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:54:02 crc kubenswrapper[4705]: I0124 07:54:02.171784 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:54:07 crc kubenswrapper[4705]: I0124 07:54:07.071310 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:54:07 crc kubenswrapper[4705]: I0124 07:54:07.071886 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:54:07 crc kubenswrapper[4705]: I0124 07:54:07.071959 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 07:54:07 crc kubenswrapper[4705]: I0124 07:54:07.072653 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"49dd615a3faac21183e353fd2b4521164856375addc3eb5e62c6cd13a36cfc96"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:54:07 crc kubenswrapper[4705]: I0124 07:54:07.072716 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://49dd615a3faac21183e353fd2b4521164856375addc3eb5e62c6cd13a36cfc96" gracePeriod=600 Jan 24 07:54:08 crc kubenswrapper[4705]: I0124 07:54:08.184363 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerID="49dd615a3faac21183e353fd2b4521164856375addc3eb5e62c6cd13a36cfc96" exitCode=0 Jan 24 07:54:08 crc kubenswrapper[4705]: I0124 07:54:08.184449 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"49dd615a3faac21183e353fd2b4521164856375addc3eb5e62c6cd13a36cfc96"} Jan 24 07:54:08 crc kubenswrapper[4705]: I0124 07:54:08.184701 4705 scope.go:117] "RemoveContainer" containerID="51ea1cc84746f000ef5c3cc6afc9d129b7d13ec7278b9c4c6bb3c9526467fa3a" Jan 24 07:54:09 crc kubenswrapper[4705]: I0124 07:54:09.207575 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"4157e341864c82a6048118572db8f63cf29b32c144fe523434dcc318955e439e"} Jan 24 07:54:23 crc kubenswrapper[4705]: I0124 07:54:23.717576 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d59sv" Jan 24 07:54:32 crc kubenswrapper[4705]: I0124 07:54:32.645962 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f"] Jan 24 07:54:32 crc kubenswrapper[4705]: I0124 07:54:32.647357 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" Jan 24 07:54:32 crc kubenswrapper[4705]: I0124 07:54:32.649988 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 24 07:54:32 crc kubenswrapper[4705]: I0124 07:54:32.660048 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f"] Jan 24 07:54:32 crc kubenswrapper[4705]: I0124 07:54:32.706136 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c803c260-b4e8-4052-9f31-174e7abed57d-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f\" (UID: \"c803c260-b4e8-4052-9f31-174e7abed57d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" Jan 24 07:54:32 crc kubenswrapper[4705]: I0124 07:54:32.706203 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c803c260-b4e8-4052-9f31-174e7abed57d-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f\" (UID: 
\"c803c260-b4e8-4052-9f31-174e7abed57d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" Jan 24 07:54:32 crc kubenswrapper[4705]: I0124 07:54:32.706226 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zqw8\" (UniqueName: \"kubernetes.io/projected/c803c260-b4e8-4052-9f31-174e7abed57d-kube-api-access-9zqw8\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f\" (UID: \"c803c260-b4e8-4052-9f31-174e7abed57d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" Jan 24 07:54:32 crc kubenswrapper[4705]: I0124 07:54:32.807214 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c803c260-b4e8-4052-9f31-174e7abed57d-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f\" (UID: \"c803c260-b4e8-4052-9f31-174e7abed57d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" Jan 24 07:54:32 crc kubenswrapper[4705]: I0124 07:54:32.807269 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zqw8\" (UniqueName: \"kubernetes.io/projected/c803c260-b4e8-4052-9f31-174e7abed57d-kube-api-access-9zqw8\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f\" (UID: \"c803c260-b4e8-4052-9f31-174e7abed57d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" Jan 24 07:54:32 crc kubenswrapper[4705]: I0124 07:54:32.807331 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c803c260-b4e8-4052-9f31-174e7abed57d-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f\" (UID: \"c803c260-b4e8-4052-9f31-174e7abed57d\") " 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" Jan 24 07:54:32 crc kubenswrapper[4705]: I0124 07:54:32.807791 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c803c260-b4e8-4052-9f31-174e7abed57d-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f\" (UID: \"c803c260-b4e8-4052-9f31-174e7abed57d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" Jan 24 07:54:32 crc kubenswrapper[4705]: I0124 07:54:32.808206 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c803c260-b4e8-4052-9f31-174e7abed57d-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f\" (UID: \"c803c260-b4e8-4052-9f31-174e7abed57d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" Jan 24 07:54:32 crc kubenswrapper[4705]: I0124 07:54:32.827473 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zqw8\" (UniqueName: \"kubernetes.io/projected/c803c260-b4e8-4052-9f31-174e7abed57d-kube-api-access-9zqw8\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f\" (UID: \"c803c260-b4e8-4052-9f31-174e7abed57d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" Jan 24 07:54:32 crc kubenswrapper[4705]: I0124 07:54:32.965209 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" Jan 24 07:54:33 crc kubenswrapper[4705]: I0124 07:54:33.209221 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f"] Jan 24 07:54:33 crc kubenswrapper[4705]: I0124 07:54:33.433101 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" event={"ID":"c803c260-b4e8-4052-9f31-174e7abed57d","Type":"ContainerStarted","Data":"d7799afd7d6ca958ab73a5dd791463b7329c51cdca3e0d58af38949b0cf3a69e"} Jan 24 07:54:33 crc kubenswrapper[4705]: I0124 07:54:33.433401 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" event={"ID":"c803c260-b4e8-4052-9f31-174e7abed57d","Type":"ContainerStarted","Data":"f5bae7b70885a80e98ddcc3e7e365d605e07c586b37fff4e3e43abad48d24cfa"} Jan 24 07:54:34 crc kubenswrapper[4705]: I0124 07:54:34.444456 4705 generic.go:334] "Generic (PLEG): container finished" podID="c803c260-b4e8-4052-9f31-174e7abed57d" containerID="d7799afd7d6ca958ab73a5dd791463b7329c51cdca3e0d58af38949b0cf3a69e" exitCode=0 Jan 24 07:54:34 crc kubenswrapper[4705]: I0124 07:54:34.444519 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" event={"ID":"c803c260-b4e8-4052-9f31-174e7abed57d","Type":"ContainerDied","Data":"d7799afd7d6ca958ab73a5dd791463b7329c51cdca3e0d58af38949b0cf3a69e"} Jan 24 07:54:34 crc kubenswrapper[4705]: I0124 07:54:34.657676 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xdkwg"] Jan 24 07:54:34 crc kubenswrapper[4705]: I0124 07:54:34.660402 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:34 crc kubenswrapper[4705]: I0124 07:54:34.671316 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xdkwg"] Jan 24 07:54:34 crc kubenswrapper[4705]: I0124 07:54:34.731427 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqdws\" (UniqueName: \"kubernetes.io/projected/bce9e327-048f-4606-bab7-8600a88f13d6-kube-api-access-kqdws\") pod \"redhat-operators-xdkwg\" (UID: \"bce9e327-048f-4606-bab7-8600a88f13d6\") " pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:34 crc kubenswrapper[4705]: I0124 07:54:34.731495 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce9e327-048f-4606-bab7-8600a88f13d6-utilities\") pod \"redhat-operators-xdkwg\" (UID: \"bce9e327-048f-4606-bab7-8600a88f13d6\") " pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:34 crc kubenswrapper[4705]: I0124 07:54:34.731521 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce9e327-048f-4606-bab7-8600a88f13d6-catalog-content\") pod \"redhat-operators-xdkwg\" (UID: \"bce9e327-048f-4606-bab7-8600a88f13d6\") " pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:34 crc kubenswrapper[4705]: I0124 07:54:34.832712 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqdws\" (UniqueName: \"kubernetes.io/projected/bce9e327-048f-4606-bab7-8600a88f13d6-kube-api-access-kqdws\") pod \"redhat-operators-xdkwg\" (UID: \"bce9e327-048f-4606-bab7-8600a88f13d6\") " pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:34 crc kubenswrapper[4705]: I0124 07:54:34.832772 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce9e327-048f-4606-bab7-8600a88f13d6-utilities\") pod \"redhat-operators-xdkwg\" (UID: \"bce9e327-048f-4606-bab7-8600a88f13d6\") " pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:34 crc kubenswrapper[4705]: I0124 07:54:34.832806 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce9e327-048f-4606-bab7-8600a88f13d6-catalog-content\") pod \"redhat-operators-xdkwg\" (UID: \"bce9e327-048f-4606-bab7-8600a88f13d6\") " pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:34 crc kubenswrapper[4705]: I0124 07:54:34.833258 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce9e327-048f-4606-bab7-8600a88f13d6-catalog-content\") pod \"redhat-operators-xdkwg\" (UID: \"bce9e327-048f-4606-bab7-8600a88f13d6\") " pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:34 crc kubenswrapper[4705]: I0124 07:54:34.833413 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce9e327-048f-4606-bab7-8600a88f13d6-utilities\") pod \"redhat-operators-xdkwg\" (UID: \"bce9e327-048f-4606-bab7-8600a88f13d6\") " pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:34 crc kubenswrapper[4705]: I0124 07:54:34.863788 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqdws\" (UniqueName: \"kubernetes.io/projected/bce9e327-048f-4606-bab7-8600a88f13d6-kube-api-access-kqdws\") pod \"redhat-operators-xdkwg\" (UID: \"bce9e327-048f-4606-bab7-8600a88f13d6\") " pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:34 crc kubenswrapper[4705]: I0124 07:54:34.990345 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:35 crc kubenswrapper[4705]: I0124 07:54:35.190772 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xdkwg"] Jan 24 07:54:35 crc kubenswrapper[4705]: W0124 07:54:35.198044 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbce9e327_048f_4606_bab7_8600a88f13d6.slice/crio-4c4a057fc5c240259c16c6456e412f6b7b95b89240f57c998da6f0b84ecc8181 WatchSource:0}: Error finding container 4c4a057fc5c240259c16c6456e412f6b7b95b89240f57c998da6f0b84ecc8181: Status 404 returned error can't find the container with id 4c4a057fc5c240259c16c6456e412f6b7b95b89240f57c998da6f0b84ecc8181 Jan 24 07:54:35 crc kubenswrapper[4705]: I0124 07:54:35.452423 4705 generic.go:334] "Generic (PLEG): container finished" podID="bce9e327-048f-4606-bab7-8600a88f13d6" containerID="9bbdffa282a6e0457123463588ca53c5c736c30f8dbd7cfbb1e7d3e6cfaf2d92" exitCode=0 Jan 24 07:54:35 crc kubenswrapper[4705]: I0124 07:54:35.452507 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdkwg" event={"ID":"bce9e327-048f-4606-bab7-8600a88f13d6","Type":"ContainerDied","Data":"9bbdffa282a6e0457123463588ca53c5c736c30f8dbd7cfbb1e7d3e6cfaf2d92"} Jan 24 07:54:35 crc kubenswrapper[4705]: I0124 07:54:35.452756 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdkwg" event={"ID":"bce9e327-048f-4606-bab7-8600a88f13d6","Type":"ContainerStarted","Data":"4c4a057fc5c240259c16c6456e412f6b7b95b89240f57c998da6f0b84ecc8181"} Jan 24 07:54:36 crc kubenswrapper[4705]: I0124 07:54:36.461926 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdkwg" 
event={"ID":"bce9e327-048f-4606-bab7-8600a88f13d6","Type":"ContainerStarted","Data":"85446b674a808a6e7ebada5f5a402240e4a1df7eed0db97d8339f95a745d3f72"} Jan 24 07:54:36 crc kubenswrapper[4705]: I0124 07:54:36.463983 4705 generic.go:334] "Generic (PLEG): container finished" podID="c803c260-b4e8-4052-9f31-174e7abed57d" containerID="a4f0df4fff5d3a02fbfafd3b2cabad6ac2645bac42f5393135ad18b1a4d024a1" exitCode=0 Jan 24 07:54:36 crc kubenswrapper[4705]: I0124 07:54:36.464021 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" event={"ID":"c803c260-b4e8-4052-9f31-174e7abed57d","Type":"ContainerDied","Data":"a4f0df4fff5d3a02fbfafd3b2cabad6ac2645bac42f5393135ad18b1a4d024a1"} Jan 24 07:54:37 crc kubenswrapper[4705]: I0124 07:54:37.470972 4705 generic.go:334] "Generic (PLEG): container finished" podID="c803c260-b4e8-4052-9f31-174e7abed57d" containerID="1e498d14bc7266ae2b5896764146c3aa65a1dd3050ecd5b304c4d48a2eda9d87" exitCode=0 Jan 24 07:54:37 crc kubenswrapper[4705]: I0124 07:54:37.471079 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" event={"ID":"c803c260-b4e8-4052-9f31-174e7abed57d","Type":"ContainerDied","Data":"1e498d14bc7266ae2b5896764146c3aa65a1dd3050ecd5b304c4d48a2eda9d87"} Jan 24 07:54:38 crc kubenswrapper[4705]: I0124 07:54:38.479599 4705 generic.go:334] "Generic (PLEG): container finished" podID="bce9e327-048f-4606-bab7-8600a88f13d6" containerID="85446b674a808a6e7ebada5f5a402240e4a1df7eed0db97d8339f95a745d3f72" exitCode=0 Jan 24 07:54:38 crc kubenswrapper[4705]: I0124 07:54:38.479683 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdkwg" event={"ID":"bce9e327-048f-4606-bab7-8600a88f13d6","Type":"ContainerDied","Data":"85446b674a808a6e7ebada5f5a402240e4a1df7eed0db97d8339f95a745d3f72"} Jan 24 07:54:38 crc kubenswrapper[4705]: 
I0124 07:54:38.695634 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" Jan 24 07:54:38 crc kubenswrapper[4705]: I0124 07:54:38.824005 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c803c260-b4e8-4052-9f31-174e7abed57d-bundle\") pod \"c803c260-b4e8-4052-9f31-174e7abed57d\" (UID: \"c803c260-b4e8-4052-9f31-174e7abed57d\") " Jan 24 07:54:38 crc kubenswrapper[4705]: I0124 07:54:38.824144 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zqw8\" (UniqueName: \"kubernetes.io/projected/c803c260-b4e8-4052-9f31-174e7abed57d-kube-api-access-9zqw8\") pod \"c803c260-b4e8-4052-9f31-174e7abed57d\" (UID: \"c803c260-b4e8-4052-9f31-174e7abed57d\") " Jan 24 07:54:38 crc kubenswrapper[4705]: I0124 07:54:38.824172 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c803c260-b4e8-4052-9f31-174e7abed57d-util\") pod \"c803c260-b4e8-4052-9f31-174e7abed57d\" (UID: \"c803c260-b4e8-4052-9f31-174e7abed57d\") " Jan 24 07:54:38 crc kubenswrapper[4705]: I0124 07:54:38.824589 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c803c260-b4e8-4052-9f31-174e7abed57d-bundle" (OuterVolumeSpecName: "bundle") pod "c803c260-b4e8-4052-9f31-174e7abed57d" (UID: "c803c260-b4e8-4052-9f31-174e7abed57d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:54:38 crc kubenswrapper[4705]: I0124 07:54:38.829025 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c803c260-b4e8-4052-9f31-174e7abed57d-kube-api-access-9zqw8" (OuterVolumeSpecName: "kube-api-access-9zqw8") pod "c803c260-b4e8-4052-9f31-174e7abed57d" (UID: "c803c260-b4e8-4052-9f31-174e7abed57d"). InnerVolumeSpecName "kube-api-access-9zqw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:54:38 crc kubenswrapper[4705]: I0124 07:54:38.926023 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zqw8\" (UniqueName: \"kubernetes.io/projected/c803c260-b4e8-4052-9f31-174e7abed57d-kube-api-access-9zqw8\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:38 crc kubenswrapper[4705]: I0124 07:54:38.926068 4705 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c803c260-b4e8-4052-9f31-174e7abed57d-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:39 crc kubenswrapper[4705]: I0124 07:54:39.110864 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c803c260-b4e8-4052-9f31-174e7abed57d-util" (OuterVolumeSpecName: "util") pod "c803c260-b4e8-4052-9f31-174e7abed57d" (UID: "c803c260-b4e8-4052-9f31-174e7abed57d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:54:39 crc kubenswrapper[4705]: I0124 07:54:39.128218 4705 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c803c260-b4e8-4052-9f31-174e7abed57d-util\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:39 crc kubenswrapper[4705]: I0124 07:54:39.488255 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdkwg" event={"ID":"bce9e327-048f-4606-bab7-8600a88f13d6","Type":"ContainerStarted","Data":"4515ce4f8ba4958c396d5f0a39ca3b5686faf856770cdcaedc0e2c272c5de65b"} Jan 24 07:54:39 crc kubenswrapper[4705]: I0124 07:54:39.490522 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" event={"ID":"c803c260-b4e8-4052-9f31-174e7abed57d","Type":"ContainerDied","Data":"f5bae7b70885a80e98ddcc3e7e365d605e07c586b37fff4e3e43abad48d24cfa"} Jan 24 07:54:39 crc kubenswrapper[4705]: I0124 07:54:39.490571 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5bae7b70885a80e98ddcc3e7e365d605e07c586b37fff4e3e43abad48d24cfa" Jan 24 07:54:39 crc kubenswrapper[4705]: I0124 07:54:39.490576 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f" Jan 24 07:54:39 crc kubenswrapper[4705]: I0124 07:54:39.512166 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xdkwg" podStartSLOduration=1.835373055 podStartE2EDuration="5.512145275s" podCreationTimestamp="2026-01-24 07:54:34 +0000 UTC" firstStartedPulling="2026-01-24 07:54:35.45930038 +0000 UTC m=+814.179173668" lastFinishedPulling="2026-01-24 07:54:39.1360726 +0000 UTC m=+817.855945888" observedRunningTime="2026-01-24 07:54:39.505356441 +0000 UTC m=+818.225229759" watchObservedRunningTime="2026-01-24 07:54:39.512145275 +0000 UTC m=+818.232018573" Jan 24 07:54:43 crc kubenswrapper[4705]: I0124 07:54:43.663149 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hn9tt"] Jan 24 07:54:43 crc kubenswrapper[4705]: E0124 07:54:43.664410 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c803c260-b4e8-4052-9f31-174e7abed57d" containerName="util" Jan 24 07:54:43 crc kubenswrapper[4705]: I0124 07:54:43.664477 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c803c260-b4e8-4052-9f31-174e7abed57d" containerName="util" Jan 24 07:54:43 crc kubenswrapper[4705]: E0124 07:54:43.664541 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c803c260-b4e8-4052-9f31-174e7abed57d" containerName="extract" Jan 24 07:54:43 crc kubenswrapper[4705]: I0124 07:54:43.664595 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c803c260-b4e8-4052-9f31-174e7abed57d" containerName="extract" Jan 24 07:54:43 crc kubenswrapper[4705]: E0124 07:54:43.664652 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c803c260-b4e8-4052-9f31-174e7abed57d" containerName="pull" Jan 24 07:54:43 crc kubenswrapper[4705]: I0124 07:54:43.664699 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c803c260-b4e8-4052-9f31-174e7abed57d" containerName="pull" Jan 24 07:54:43 crc kubenswrapper[4705]: I0124 07:54:43.664864 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c803c260-b4e8-4052-9f31-174e7abed57d" containerName="extract" Jan 24 07:54:43 crc kubenswrapper[4705]: I0124 07:54:43.665325 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-hn9tt" Jan 24 07:54:43 crc kubenswrapper[4705]: I0124 07:54:43.669060 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 24 07:54:43 crc kubenswrapper[4705]: I0124 07:54:43.669234 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-vvzd9" Jan 24 07:54:43 crc kubenswrapper[4705]: I0124 07:54:43.669344 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 24 07:54:43 crc kubenswrapper[4705]: I0124 07:54:43.680487 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hn9tt"] Jan 24 07:54:43 crc kubenswrapper[4705]: I0124 07:54:43.682548 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skbsr\" (UniqueName: \"kubernetes.io/projected/0bf1a582-10a6-4207-a953-16b7751ea5ef-kube-api-access-skbsr\") pod \"nmstate-operator-646758c888-hn9tt\" (UID: \"0bf1a582-10a6-4207-a953-16b7751ea5ef\") " pod="openshift-nmstate/nmstate-operator-646758c888-hn9tt" Jan 24 07:54:43 crc kubenswrapper[4705]: I0124 07:54:43.783633 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skbsr\" (UniqueName: \"kubernetes.io/projected/0bf1a582-10a6-4207-a953-16b7751ea5ef-kube-api-access-skbsr\") pod \"nmstate-operator-646758c888-hn9tt\" (UID: \"0bf1a582-10a6-4207-a953-16b7751ea5ef\") " 
pod="openshift-nmstate/nmstate-operator-646758c888-hn9tt" Jan 24 07:54:43 crc kubenswrapper[4705]: I0124 07:54:43.813837 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skbsr\" (UniqueName: \"kubernetes.io/projected/0bf1a582-10a6-4207-a953-16b7751ea5ef-kube-api-access-skbsr\") pod \"nmstate-operator-646758c888-hn9tt\" (UID: \"0bf1a582-10a6-4207-a953-16b7751ea5ef\") " pod="openshift-nmstate/nmstate-operator-646758c888-hn9tt" Jan 24 07:54:43 crc kubenswrapper[4705]: I0124 07:54:43.981862 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-hn9tt" Jan 24 07:54:44 crc kubenswrapper[4705]: I0124 07:54:44.358101 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hn9tt"] Jan 24 07:54:44 crc kubenswrapper[4705]: I0124 07:54:44.514758 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-hn9tt" event={"ID":"0bf1a582-10a6-4207-a953-16b7751ea5ef","Type":"ContainerStarted","Data":"5a74252158cbee19e3fb1a1822d3b3e1145d239a9ae994f2336a3ffb54452072"} Jan 24 07:54:44 crc kubenswrapper[4705]: I0124 07:54:44.990982 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:44 crc kubenswrapper[4705]: I0124 07:54:44.991034 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:45 crc kubenswrapper[4705]: I0124 07:54:45.029749 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:45 crc kubenswrapper[4705]: I0124 07:54:45.561070 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:46 crc kubenswrapper[4705]: I0124 07:54:46.526303 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-hn9tt" event={"ID":"0bf1a582-10a6-4207-a953-16b7751ea5ef","Type":"ContainerStarted","Data":"54ffe913cf311584027c32dc089db112a19ca59403e6f2656bff25e5bb4654fa"} Jan 24 07:54:46 crc kubenswrapper[4705]: I0124 07:54:46.541712 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-hn9tt" podStartSLOduration=1.743496765 podStartE2EDuration="3.54169211s" podCreationTimestamp="2026-01-24 07:54:43 +0000 UTC" firstStartedPulling="2026-01-24 07:54:44.369342574 +0000 UTC m=+823.089215862" lastFinishedPulling="2026-01-24 07:54:46.167537919 +0000 UTC m=+824.887411207" observedRunningTime="2026-01-24 07:54:46.540416083 +0000 UTC m=+825.260289381" watchObservedRunningTime="2026-01-24 07:54:46.54169211 +0000 UTC m=+825.261565418" Jan 24 07:54:47 crc kubenswrapper[4705]: I0124 07:54:47.647344 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xdkwg"] Jan 24 07:54:47 crc kubenswrapper[4705]: I0124 07:54:47.647879 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xdkwg" podUID="bce9e327-048f-4606-bab7-8600a88f13d6" containerName="registry-server" containerID="cri-o://4515ce4f8ba4958c396d5f0a39ca3b5686faf856770cdcaedc0e2c272c5de65b" gracePeriod=2 Jan 24 07:54:48 crc kubenswrapper[4705]: I0124 07:54:48.540115 4705 generic.go:334] "Generic (PLEG): container finished" podID="bce9e327-048f-4606-bab7-8600a88f13d6" containerID="4515ce4f8ba4958c396d5f0a39ca3b5686faf856770cdcaedc0e2c272c5de65b" exitCode=0 Jan 24 07:54:48 crc kubenswrapper[4705]: I0124 07:54:48.540203 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdkwg" 
event={"ID":"bce9e327-048f-4606-bab7-8600a88f13d6","Type":"ContainerDied","Data":"4515ce4f8ba4958c396d5f0a39ca3b5686faf856770cdcaedc0e2c272c5de65b"} Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.116899 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.217784 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce9e327-048f-4606-bab7-8600a88f13d6-catalog-content\") pod \"bce9e327-048f-4606-bab7-8600a88f13d6\" (UID: \"bce9e327-048f-4606-bab7-8600a88f13d6\") " Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.217955 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqdws\" (UniqueName: \"kubernetes.io/projected/bce9e327-048f-4606-bab7-8600a88f13d6-kube-api-access-kqdws\") pod \"bce9e327-048f-4606-bab7-8600a88f13d6\" (UID: \"bce9e327-048f-4606-bab7-8600a88f13d6\") " Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.217993 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce9e327-048f-4606-bab7-8600a88f13d6-utilities\") pod \"bce9e327-048f-4606-bab7-8600a88f13d6\" (UID: \"bce9e327-048f-4606-bab7-8600a88f13d6\") " Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.219126 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bce9e327-048f-4606-bab7-8600a88f13d6-utilities" (OuterVolumeSpecName: "utilities") pod "bce9e327-048f-4606-bab7-8600a88f13d6" (UID: "bce9e327-048f-4606-bab7-8600a88f13d6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.223540 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bce9e327-048f-4606-bab7-8600a88f13d6-kube-api-access-kqdws" (OuterVolumeSpecName: "kube-api-access-kqdws") pod "bce9e327-048f-4606-bab7-8600a88f13d6" (UID: "bce9e327-048f-4606-bab7-8600a88f13d6"). InnerVolumeSpecName "kube-api-access-kqdws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.319893 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqdws\" (UniqueName: \"kubernetes.io/projected/bce9e327-048f-4606-bab7-8600a88f13d6-kube-api-access-kqdws\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.319938 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce9e327-048f-4606-bab7-8600a88f13d6-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.338107 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bce9e327-048f-4606-bab7-8600a88f13d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bce9e327-048f-4606-bab7-8600a88f13d6" (UID: "bce9e327-048f-4606-bab7-8600a88f13d6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.421582 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce9e327-048f-4606-bab7-8600a88f13d6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.547401 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdkwg" event={"ID":"bce9e327-048f-4606-bab7-8600a88f13d6","Type":"ContainerDied","Data":"4c4a057fc5c240259c16c6456e412f6b7b95b89240f57c998da6f0b84ecc8181"} Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.547462 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xdkwg" Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.547512 4705 scope.go:117] "RemoveContainer" containerID="4515ce4f8ba4958c396d5f0a39ca3b5686faf856770cdcaedc0e2c272c5de65b" Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.570523 4705 scope.go:117] "RemoveContainer" containerID="85446b674a808a6e7ebada5f5a402240e4a1df7eed0db97d8339f95a745d3f72" Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.590614 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xdkwg"] Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.592698 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xdkwg"] Jan 24 07:54:49 crc kubenswrapper[4705]: I0124 07:54:49.608388 4705 scope.go:117] "RemoveContainer" containerID="9bbdffa282a6e0457123463588ca53c5c736c30f8dbd7cfbb1e7d3e6cfaf2d92" Jan 24 07:54:51 crc kubenswrapper[4705]: I0124 07:54:51.582417 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bce9e327-048f-4606-bab7-8600a88f13d6" path="/var/lib/kubelet/pods/bce9e327-048f-4606-bab7-8600a88f13d6/volumes" Jan 24 07:54:52 crc 
kubenswrapper[4705]: I0124 07:54:52.664816 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-pfqmg"] Jan 24 07:54:52 crc kubenswrapper[4705]: E0124 07:54:52.665171 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce9e327-048f-4606-bab7-8600a88f13d6" containerName="extract-utilities" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.665191 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce9e327-048f-4606-bab7-8600a88f13d6" containerName="extract-utilities" Jan 24 07:54:52 crc kubenswrapper[4705]: E0124 07:54:52.665220 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce9e327-048f-4606-bab7-8600a88f13d6" containerName="extract-content" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.665233 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce9e327-048f-4606-bab7-8600a88f13d6" containerName="extract-content" Jan 24 07:54:52 crc kubenswrapper[4705]: E0124 07:54:52.665249 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce9e327-048f-4606-bab7-8600a88f13d6" containerName="registry-server" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.665262 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce9e327-048f-4606-bab7-8600a88f13d6" containerName="registry-server" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.665437 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="bce9e327-048f-4606-bab7-8600a88f13d6" containerName="registry-server" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.666452 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-pfqmg" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.668258 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-pzmfx" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.677868 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-pfqmg"] Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.688228 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-h4stc"] Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.689028 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h4stc" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.698118 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.703412 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-79zvw"] Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.704676 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-79zvw" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.707637 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-h4stc"] Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.763416 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljfsw\" (UniqueName: \"kubernetes.io/projected/f63e787b-b789-4a2f-a0f4-fa433cefe73c-kube-api-access-ljfsw\") pod \"nmstate-metrics-54757c584b-pfqmg\" (UID: \"f63e787b-b789-4a2f-a0f4-fa433cefe73c\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-pfqmg" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.799072 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf"] Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.799912 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.802672 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.803376 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.803635 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-bvxnb" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.809696 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf"] Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.865102 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb-ovs-socket\") pod \"nmstate-handler-79zvw\" (UID: \"8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb\") " pod="openshift-nmstate/nmstate-handler-79zvw" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.865172 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kclpn\" (UniqueName: \"kubernetes.io/projected/8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb-kube-api-access-kclpn\") pod \"nmstate-handler-79zvw\" (UID: \"8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb\") " pod="openshift-nmstate/nmstate-handler-79zvw" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.865200 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f9ff4190-6e7e-4e11-8287-1f8c6aa35088-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-h4stc\" (UID: \"f9ff4190-6e7e-4e11-8287-1f8c6aa35088\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h4stc" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.865314 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb-dbus-socket\") pod \"nmstate-handler-79zvw\" (UID: \"8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb\") " pod="openshift-nmstate/nmstate-handler-79zvw" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.865355 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5757l\" (UniqueName: \"kubernetes.io/projected/f9ff4190-6e7e-4e11-8287-1f8c6aa35088-kube-api-access-5757l\") pod \"nmstate-webhook-8474b5b9d8-h4stc\" (UID: \"f9ff4190-6e7e-4e11-8287-1f8c6aa35088\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h4stc" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.865468 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb-nmstate-lock\") pod \"nmstate-handler-79zvw\" (UID: \"8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb\") " pod="openshift-nmstate/nmstate-handler-79zvw" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.865506 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljfsw\" (UniqueName: \"kubernetes.io/projected/f63e787b-b789-4a2f-a0f4-fa433cefe73c-kube-api-access-ljfsw\") pod \"nmstate-metrics-54757c584b-pfqmg\" (UID: \"f63e787b-b789-4a2f-a0f4-fa433cefe73c\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-pfqmg" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.891006 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljfsw\" (UniqueName: \"kubernetes.io/projected/f63e787b-b789-4a2f-a0f4-fa433cefe73c-kube-api-access-ljfsw\") pod \"nmstate-metrics-54757c584b-pfqmg\" (UID: \"f63e787b-b789-4a2f-a0f4-fa433cefe73c\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-pfqmg" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.967264 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kclpn\" (UniqueName: \"kubernetes.io/projected/8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb-kube-api-access-kclpn\") pod \"nmstate-handler-79zvw\" (UID: \"8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb\") " pod="openshift-nmstate/nmstate-handler-79zvw" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.967317 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f9ff4190-6e7e-4e11-8287-1f8c6aa35088-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-h4stc\" (UID: \"f9ff4190-6e7e-4e11-8287-1f8c6aa35088\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h4stc" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.967349 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb-dbus-socket\") pod \"nmstate-handler-79zvw\" (UID: \"8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb\") " pod="openshift-nmstate/nmstate-handler-79zvw" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.967392 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5757l\" (UniqueName: \"kubernetes.io/projected/f9ff4190-6e7e-4e11-8287-1f8c6aa35088-kube-api-access-5757l\") pod \"nmstate-webhook-8474b5b9d8-h4stc\" (UID: \"f9ff4190-6e7e-4e11-8287-1f8c6aa35088\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h4stc" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.967427 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlhpw\" (UniqueName: \"kubernetes.io/projected/54c5f862-8725-4f20-9624-c854d2b48634-kube-api-access-wlhpw\") pod \"nmstate-console-plugin-7754f76f8b-zc7nf\" (UID: \"54c5f862-8725-4f20-9624-c854d2b48634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.967464 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/54c5f862-8725-4f20-9624-c854d2b48634-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zc7nf\" (UID: \"54c5f862-8725-4f20-9624-c854d2b48634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.967507 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb-nmstate-lock\") pod \"nmstate-handler-79zvw\" (UID: \"8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb\") " pod="openshift-nmstate/nmstate-handler-79zvw" Jan 24 
07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.967554 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/54c5f862-8725-4f20-9624-c854d2b48634-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zc7nf\" (UID: \"54c5f862-8725-4f20-9624-c854d2b48634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.967582 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb-ovs-socket\") pod \"nmstate-handler-79zvw\" (UID: \"8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb\") " pod="openshift-nmstate/nmstate-handler-79zvw" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.967930 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb-dbus-socket\") pod \"nmstate-handler-79zvw\" (UID: \"8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb\") " pod="openshift-nmstate/nmstate-handler-79zvw" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.968151 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb-nmstate-lock\") pod \"nmstate-handler-79zvw\" (UID: \"8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb\") " pod="openshift-nmstate/nmstate-handler-79zvw" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.968192 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb-ovs-socket\") pod \"nmstate-handler-79zvw\" (UID: \"8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb\") " pod="openshift-nmstate/nmstate-handler-79zvw" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.972134 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f9ff4190-6e7e-4e11-8287-1f8c6aa35088-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-h4stc\" (UID: \"f9ff4190-6e7e-4e11-8287-1f8c6aa35088\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h4stc" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.985477 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-pfqmg" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.993736 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5757l\" (UniqueName: \"kubernetes.io/projected/f9ff4190-6e7e-4e11-8287-1f8c6aa35088-kube-api-access-5757l\") pod \"nmstate-webhook-8474b5b9d8-h4stc\" (UID: \"f9ff4190-6e7e-4e11-8287-1f8c6aa35088\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h4stc" Jan 24 07:54:52 crc kubenswrapper[4705]: I0124 07:54:52.996177 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6fd69f64b7-6qpp8"] Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.005103 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.011615 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kclpn\" (UniqueName: \"kubernetes.io/projected/8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb-kube-api-access-kclpn\") pod \"nmstate-handler-79zvw\" (UID: \"8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb\") " pod="openshift-nmstate/nmstate-handler-79zvw" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.014497 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h4stc" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.015584 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fd69f64b7-6qpp8"] Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.030697 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-79zvw" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.068691 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/54c5f862-8725-4f20-9624-c854d2b48634-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zc7nf\" (UID: \"54c5f862-8725-4f20-9624-c854d2b48634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.068788 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlhpw\" (UniqueName: \"kubernetes.io/projected/54c5f862-8725-4f20-9624-c854d2b48634-kube-api-access-wlhpw\") pod \"nmstate-console-plugin-7754f76f8b-zc7nf\" (UID: \"54c5f862-8725-4f20-9624-c854d2b48634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.068811 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/54c5f862-8725-4f20-9624-c854d2b48634-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zc7nf\" (UID: \"54c5f862-8725-4f20-9624-c854d2b48634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.070269 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/54c5f862-8725-4f20-9624-c854d2b48634-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zc7nf\" (UID: 
\"54c5f862-8725-4f20-9624-c854d2b48634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf" Jan 24 07:54:53 crc kubenswrapper[4705]: E0124 07:54:53.070364 4705 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 24 07:54:53 crc kubenswrapper[4705]: E0124 07:54:53.070411 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54c5f862-8725-4f20-9624-c854d2b48634-plugin-serving-cert podName:54c5f862-8725-4f20-9624-c854d2b48634 nodeName:}" failed. No retries permitted until 2026-01-24 07:54:53.570395318 +0000 UTC m=+832.290268606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/54c5f862-8725-4f20-9624-c854d2b48634-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-zc7nf" (UID: "54c5f862-8725-4f20-9624-c854d2b48634") : secret "plugin-serving-cert" not found Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.097278 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlhpw\" (UniqueName: \"kubernetes.io/projected/54c5f862-8725-4f20-9624-c854d2b48634-kube-api-access-wlhpw\") pod \"nmstate-console-plugin-7754f76f8b-zc7nf\" (UID: \"54c5f862-8725-4f20-9624-c854d2b48634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.169993 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxmzn\" (UniqueName: \"kubernetes.io/projected/3109ebee-5906-4e1e-a66b-63c3ea05ff18-kube-api-access-bxmzn\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.170079 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/3109ebee-5906-4e1e-a66b-63c3ea05ff18-oauth-serving-cert\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.170109 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3109ebee-5906-4e1e-a66b-63c3ea05ff18-console-config\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.170178 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3109ebee-5906-4e1e-a66b-63c3ea05ff18-console-oauth-config\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.170726 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3109ebee-5906-4e1e-a66b-63c3ea05ff18-console-serving-cert\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.170790 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3109ebee-5906-4e1e-a66b-63c3ea05ff18-trusted-ca-bundle\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.170908 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3109ebee-5906-4e1e-a66b-63c3ea05ff18-service-ca\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.272004 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3109ebee-5906-4e1e-a66b-63c3ea05ff18-console-serving-cert\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.272051 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3109ebee-5906-4e1e-a66b-63c3ea05ff18-trusted-ca-bundle\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.272069 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3109ebee-5906-4e1e-a66b-63c3ea05ff18-service-ca\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.272139 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxmzn\" (UniqueName: \"kubernetes.io/projected/3109ebee-5906-4e1e-a66b-63c3ea05ff18-kube-api-access-bxmzn\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.272162 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3109ebee-5906-4e1e-a66b-63c3ea05ff18-oauth-serving-cert\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.272180 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3109ebee-5906-4e1e-a66b-63c3ea05ff18-console-config\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.272206 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3109ebee-5906-4e1e-a66b-63c3ea05ff18-console-oauth-config\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.273878 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3109ebee-5906-4e1e-a66b-63c3ea05ff18-service-ca\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.274096 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3109ebee-5906-4e1e-a66b-63c3ea05ff18-trusted-ca-bundle\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.274198 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/3109ebee-5906-4e1e-a66b-63c3ea05ff18-console-config\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.275138 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3109ebee-5906-4e1e-a66b-63c3ea05ff18-oauth-serving-cert\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.275327 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3109ebee-5906-4e1e-a66b-63c3ea05ff18-console-oauth-config\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.279500 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3109ebee-5906-4e1e-a66b-63c3ea05ff18-console-serving-cert\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.292763 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxmzn\" (UniqueName: \"kubernetes.io/projected/3109ebee-5906-4e1e-a66b-63c3ea05ff18-kube-api-access-bxmzn\") pod \"console-6fd69f64b7-6qpp8\" (UID: \"3109ebee-5906-4e1e-a66b-63c3ea05ff18\") " pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.292892 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-h4stc"] Jan 24 
07:54:53 crc kubenswrapper[4705]: W0124 07:54:53.297411 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9ff4190_6e7e_4e11_8287_1f8c6aa35088.slice/crio-8846ce2a5e44ce7bf9b656ce5491f55b124bb5ea4abd6ced5d759a7fa8f482f9 WatchSource:0}: Error finding container 8846ce2a5e44ce7bf9b656ce5491f55b124bb5ea4abd6ced5d759a7fa8f482f9: Status 404 returned error can't find the container with id 8846ce2a5e44ce7bf9b656ce5491f55b124bb5ea4abd6ced5d759a7fa8f482f9 Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.398572 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.437873 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-pfqmg"] Jan 24 07:54:53 crc kubenswrapper[4705]: W0124 07:54:53.452954 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf63e787b_b789_4a2f_a0f4_fa433cefe73c.slice/crio-779b9a313ea66967ad0bbf60c04d9b4260a9fdc07bace3afbcfcfc823cd147dc WatchSource:0}: Error finding container 779b9a313ea66967ad0bbf60c04d9b4260a9fdc07bace3afbcfcfc823cd147dc: Status 404 returned error can't find the container with id 779b9a313ea66967ad0bbf60c04d9b4260a9fdc07bace3afbcfcfc823cd147dc Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.577671 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/54c5f862-8725-4f20-9624-c854d2b48634-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zc7nf\" (UID: \"54c5f862-8725-4f20-9624-c854d2b48634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf" Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.583939 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/54c5f862-8725-4f20-9624-c854d2b48634-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zc7nf\" (UID: \"54c5f862-8725-4f20-9624-c854d2b48634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf" Jan 24 07:54:53 crc kubenswrapper[4705]: W0124 07:54:53.587752 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3109ebee_5906_4e1e_a66b_63c3ea05ff18.slice/crio-9fa9acf55bf06211ee1bba02c6c39b3003e883ab1524c9245a224affbd84fff5 WatchSource:0}: Error finding container 9fa9acf55bf06211ee1bba02c6c39b3003e883ab1524c9245a224affbd84fff5: Status 404 returned error can't find the container with id 9fa9acf55bf06211ee1bba02c6c39b3003e883ab1524c9245a224affbd84fff5 Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.589344 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-pfqmg" event={"ID":"f63e787b-b789-4a2f-a0f4-fa433cefe73c","Type":"ContainerStarted","Data":"779b9a313ea66967ad0bbf60c04d9b4260a9fdc07bace3afbcfcfc823cd147dc"} Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.589408 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fd69f64b7-6qpp8"] Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.589424 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h4stc" event={"ID":"f9ff4190-6e7e-4e11-8287-1f8c6aa35088","Type":"ContainerStarted","Data":"8846ce2a5e44ce7bf9b656ce5491f55b124bb5ea4abd6ced5d759a7fa8f482f9"} Jan 24 07:54:53 crc kubenswrapper[4705]: I0124 07:54:53.590255 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-79zvw" event={"ID":"8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb","Type":"ContainerStarted","Data":"cc99261ce0a08a6a7c1301fb10c0f3a34a88e52f0296711c47e85cc6cdd7c4f7"} Jan 24 07:54:53 crc kubenswrapper[4705]: 
I0124 07:54:53.714755 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf" Jan 24 07:54:54 crc kubenswrapper[4705]: I0124 07:54:54.114573 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf"] Jan 24 07:54:54 crc kubenswrapper[4705]: W0124 07:54:54.125458 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54c5f862_8725_4f20_9624_c854d2b48634.slice/crio-1bf96e135db7a35566c33101eb0ba2c1b5c685577766f096ba8ecff265c17065 WatchSource:0}: Error finding container 1bf96e135db7a35566c33101eb0ba2c1b5c685577766f096ba8ecff265c17065: Status 404 returned error can't find the container with id 1bf96e135db7a35566c33101eb0ba2c1b5c685577766f096ba8ecff265c17065 Jan 24 07:54:54 crc kubenswrapper[4705]: I0124 07:54:54.599710 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf" event={"ID":"54c5f862-8725-4f20-9624-c854d2b48634","Type":"ContainerStarted","Data":"1bf96e135db7a35566c33101eb0ba2c1b5c685577766f096ba8ecff265c17065"} Jan 24 07:54:54 crc kubenswrapper[4705]: I0124 07:54:54.601147 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fd69f64b7-6qpp8" event={"ID":"3109ebee-5906-4e1e-a66b-63c3ea05ff18","Type":"ContainerStarted","Data":"9d19b97cf9021181b57ac9c260919e50277a8c29c199902c392219426cca7190"} Jan 24 07:54:54 crc kubenswrapper[4705]: I0124 07:54:54.601176 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fd69f64b7-6qpp8" event={"ID":"3109ebee-5906-4e1e-a66b-63c3ea05ff18","Type":"ContainerStarted","Data":"9fa9acf55bf06211ee1bba02c6c39b3003e883ab1524c9245a224affbd84fff5"} Jan 24 07:54:54 crc kubenswrapper[4705]: I0124 07:54:54.632846 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-6fd69f64b7-6qpp8" podStartSLOduration=2.632802551 podStartE2EDuration="2.632802551s" podCreationTimestamp="2026-01-24 07:54:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:54:54.624869215 +0000 UTC m=+833.344742503" watchObservedRunningTime="2026-01-24 07:54:54.632802551 +0000 UTC m=+833.352675839" Jan 24 07:54:56 crc kubenswrapper[4705]: I0124 07:54:56.615857 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-79zvw" event={"ID":"8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb","Type":"ContainerStarted","Data":"27c5c6673b68b7ed3ed7db66b95501d6bd35186abe56a35dfa5d37b2fa9a9029"} Jan 24 07:54:56 crc kubenswrapper[4705]: I0124 07:54:56.616520 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-79zvw" Jan 24 07:54:56 crc kubenswrapper[4705]: I0124 07:54:56.617606 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-pfqmg" event={"ID":"f63e787b-b789-4a2f-a0f4-fa433cefe73c","Type":"ContainerStarted","Data":"9e1a29e11e014241f42d89ab343b4c5b278f1fd9f911baf644e59ac16084307f"} Jan 24 07:54:56 crc kubenswrapper[4705]: I0124 07:54:56.619915 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h4stc" event={"ID":"f9ff4190-6e7e-4e11-8287-1f8c6aa35088","Type":"ContainerStarted","Data":"b42c8b784121010a423f67853e80c66c5282565be5385374606cd1d2293b7974"} Jan 24 07:54:56 crc kubenswrapper[4705]: I0124 07:54:56.620151 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h4stc" Jan 24 07:54:56 crc kubenswrapper[4705]: I0124 07:54:56.639128 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-79zvw" podStartSLOduration=2.231111102 
podStartE2EDuration="4.639107708s" podCreationTimestamp="2026-01-24 07:54:52 +0000 UTC" firstStartedPulling="2026-01-24 07:54:53.081111805 +0000 UTC m=+831.800985093" lastFinishedPulling="2026-01-24 07:54:55.489108411 +0000 UTC m=+834.208981699" observedRunningTime="2026-01-24 07:54:56.637135212 +0000 UTC m=+835.357008510" watchObservedRunningTime="2026-01-24 07:54:56.639107708 +0000 UTC m=+835.358981006" Jan 24 07:54:56 crc kubenswrapper[4705]: I0124 07:54:56.660165 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h4stc" podStartSLOduration=2.469304695 podStartE2EDuration="4.66014399s" podCreationTimestamp="2026-01-24 07:54:52 +0000 UTC" firstStartedPulling="2026-01-24 07:54:53.299290095 +0000 UTC m=+832.019163383" lastFinishedPulling="2026-01-24 07:54:55.49012939 +0000 UTC m=+834.210002678" observedRunningTime="2026-01-24 07:54:56.656505696 +0000 UTC m=+835.376378984" watchObservedRunningTime="2026-01-24 07:54:56.66014399 +0000 UTC m=+835.380017278" Jan 24 07:54:57 crc kubenswrapper[4705]: I0124 07:54:57.631120 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf" event={"ID":"54c5f862-8725-4f20-9624-c854d2b48634","Type":"ContainerStarted","Data":"1f9a4c1b3b17b88d03a7a1ba82e2fa3495159021d3d12bb8c0446a8d53996bd5"} Jan 24 07:54:57 crc kubenswrapper[4705]: I0124 07:54:57.653290 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc7nf" podStartSLOduration=3.308780384 podStartE2EDuration="5.653269412s" podCreationTimestamp="2026-01-24 07:54:52 +0000 UTC" firstStartedPulling="2026-01-24 07:54:54.127787239 +0000 UTC m=+832.847660527" lastFinishedPulling="2026-01-24 07:54:56.472276267 +0000 UTC m=+835.192149555" observedRunningTime="2026-01-24 07:54:57.648765123 +0000 UTC m=+836.368638411" watchObservedRunningTime="2026-01-24 07:54:57.653269412 +0000 UTC 
m=+836.373142700" Jan 24 07:54:58 crc kubenswrapper[4705]: I0124 07:54:58.638707 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-pfqmg" event={"ID":"f63e787b-b789-4a2f-a0f4-fa433cefe73c","Type":"ContainerStarted","Data":"f818407841d61d33ecf38941c36cc3ea4a50b29504ea860b5a68177206c4cc92"} Jan 24 07:54:58 crc kubenswrapper[4705]: I0124 07:54:58.673888 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-pfqmg" podStartSLOduration=2.545637759 podStartE2EDuration="6.67386433s" podCreationTimestamp="2026-01-24 07:54:52 +0000 UTC" firstStartedPulling="2026-01-24 07:54:53.456637225 +0000 UTC m=+832.176510513" lastFinishedPulling="2026-01-24 07:54:57.584863806 +0000 UTC m=+836.304737084" observedRunningTime="2026-01-24 07:54:58.667852948 +0000 UTC m=+837.387726286" watchObservedRunningTime="2026-01-24 07:54:58.67386433 +0000 UTC m=+837.393737618" Jan 24 07:55:03 crc kubenswrapper[4705]: I0124 07:55:03.055756 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-79zvw" Jan 24 07:55:03 crc kubenswrapper[4705]: I0124 07:55:03.399049 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:55:03 crc kubenswrapper[4705]: I0124 07:55:03.399329 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:55:03 crc kubenswrapper[4705]: I0124 07:55:03.404706 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:55:03 crc kubenswrapper[4705]: I0124 07:55:03.668930 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6fd69f64b7-6qpp8" Jan 24 07:55:03 crc kubenswrapper[4705]: I0124 07:55:03.726678 4705 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-console/console-f9d7485db-n7xmf"] Jan 24 07:55:13 crc kubenswrapper[4705]: I0124 07:55:13.020562 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h4stc" Jan 24 07:55:26 crc kubenswrapper[4705]: I0124 07:55:26.330047 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp"] Jan 24 07:55:26 crc kubenswrapper[4705]: I0124 07:55:26.331719 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" Jan 24 07:55:26 crc kubenswrapper[4705]: I0124 07:55:26.334166 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 24 07:55:26 crc kubenswrapper[4705]: I0124 07:55:26.344864 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp"] Jan 24 07:55:26 crc kubenswrapper[4705]: I0124 07:55:26.494790 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6531f50-1387-4cfa-946d-ef139131e7d0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp\" (UID: \"c6531f50-1387-4cfa-946d-ef139131e7d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" Jan 24 07:55:26 crc kubenswrapper[4705]: I0124 07:55:26.494944 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6531f50-1387-4cfa-946d-ef139131e7d0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp\" (UID: \"c6531f50-1387-4cfa-946d-ef139131e7d0\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" Jan 24 07:55:26 crc kubenswrapper[4705]: I0124 07:55:26.495006 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fxfr\" (UniqueName: \"kubernetes.io/projected/c6531f50-1387-4cfa-946d-ef139131e7d0-kube-api-access-6fxfr\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp\" (UID: \"c6531f50-1387-4cfa-946d-ef139131e7d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" Jan 24 07:55:26 crc kubenswrapper[4705]: I0124 07:55:26.595804 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6531f50-1387-4cfa-946d-ef139131e7d0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp\" (UID: \"c6531f50-1387-4cfa-946d-ef139131e7d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" Jan 24 07:55:26 crc kubenswrapper[4705]: I0124 07:55:26.595891 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6531f50-1387-4cfa-946d-ef139131e7d0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp\" (UID: \"c6531f50-1387-4cfa-946d-ef139131e7d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" Jan 24 07:55:26 crc kubenswrapper[4705]: I0124 07:55:26.595953 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fxfr\" (UniqueName: \"kubernetes.io/projected/c6531f50-1387-4cfa-946d-ef139131e7d0-kube-api-access-6fxfr\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp\" (UID: \"c6531f50-1387-4cfa-946d-ef139131e7d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" Jan 24 
07:55:26 crc kubenswrapper[4705]: I0124 07:55:26.596393 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6531f50-1387-4cfa-946d-ef139131e7d0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp\" (UID: \"c6531f50-1387-4cfa-946d-ef139131e7d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" Jan 24 07:55:26 crc kubenswrapper[4705]: I0124 07:55:26.596602 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6531f50-1387-4cfa-946d-ef139131e7d0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp\" (UID: \"c6531f50-1387-4cfa-946d-ef139131e7d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" Jan 24 07:55:26 crc kubenswrapper[4705]: I0124 07:55:26.628325 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fxfr\" (UniqueName: \"kubernetes.io/projected/c6531f50-1387-4cfa-946d-ef139131e7d0-kube-api-access-6fxfr\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp\" (UID: \"c6531f50-1387-4cfa-946d-ef139131e7d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" Jan 24 07:55:26 crc kubenswrapper[4705]: I0124 07:55:26.647458 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" Jan 24 07:55:26 crc kubenswrapper[4705]: I0124 07:55:26.877233 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp"] Jan 24 07:55:26 crc kubenswrapper[4705]: I0124 07:55:26.998892 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" event={"ID":"c6531f50-1387-4cfa-946d-ef139131e7d0","Type":"ContainerStarted","Data":"eb08e9e7eace112345a86ad69534e5dcaf0b72e77a390599bc68455e6e509115"} Jan 24 07:55:28 crc kubenswrapper[4705]: I0124 07:55:28.016959 4705 generic.go:334] "Generic (PLEG): container finished" podID="c6531f50-1387-4cfa-946d-ef139131e7d0" containerID="74a0ffdfa4ebe97bc7eea4b84a8b70c4befe6e74bc1aec865142dfec5bf4da6e" exitCode=0 Jan 24 07:55:28 crc kubenswrapper[4705]: I0124 07:55:28.017037 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" event={"ID":"c6531f50-1387-4cfa-946d-ef139131e7d0","Type":"ContainerDied","Data":"74a0ffdfa4ebe97bc7eea4b84a8b70c4befe6e74bc1aec865142dfec5bf4da6e"} Jan 24 07:55:28 crc kubenswrapper[4705]: I0124 07:55:28.766359 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-n7xmf" podUID="fffd44a1-fefc-43dc-81d7-7290a3f2d6cf" containerName="console" containerID="cri-o://9cc1ae9a09dcbfe038c06abc106d698166b6b7a3bea1ee9f6454b458f51521d0" gracePeriod=15 Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.028723 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-n7xmf_fffd44a1-fefc-43dc-81d7-7290a3f2d6cf/console/0.log" Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.029010 4705 generic.go:334] "Generic (PLEG): container 
finished" podID="fffd44a1-fefc-43dc-81d7-7290a3f2d6cf" containerID="9cc1ae9a09dcbfe038c06abc106d698166b6b7a3bea1ee9f6454b458f51521d0" exitCode=2 Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.029045 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n7xmf" event={"ID":"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf","Type":"ContainerDied","Data":"9cc1ae9a09dcbfe038c06abc106d698166b6b7a3bea1ee9f6454b458f51521d0"} Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.152008 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-n7xmf_fffd44a1-fefc-43dc-81d7-7290a3f2d6cf/console/0.log" Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.152077 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.230526 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-trusted-ca-bundle\") pod \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.230576 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-oauth-serving-cert\") pod \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.230610 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-oauth-config\") pod \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " Jan 24 07:55:29 crc kubenswrapper[4705]: 
I0124 07:55:29.230628 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-serving-cert\") pod \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.230712 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mbg8\" (UniqueName: \"kubernetes.io/projected/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-kube-api-access-7mbg8\") pod \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.230733 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-service-ca\") pod \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.230761 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-config\") pod \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\" (UID: \"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf\") " Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.231474 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "fffd44a1-fefc-43dc-81d7-7290a3f2d6cf" (UID: "fffd44a1-fefc-43dc-81d7-7290a3f2d6cf"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.231478 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "fffd44a1-fefc-43dc-81d7-7290a3f2d6cf" (UID: "fffd44a1-fefc-43dc-81d7-7290a3f2d6cf"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.231573 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-config" (OuterVolumeSpecName: "console-config") pod "fffd44a1-fefc-43dc-81d7-7290a3f2d6cf" (UID: "fffd44a1-fefc-43dc-81d7-7290a3f2d6cf"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.231983 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-service-ca" (OuterVolumeSpecName: "service-ca") pod "fffd44a1-fefc-43dc-81d7-7290a3f2d6cf" (UID: "fffd44a1-fefc-43dc-81d7-7290a3f2d6cf"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.237809 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "fffd44a1-fefc-43dc-81d7-7290a3f2d6cf" (UID: "fffd44a1-fefc-43dc-81d7-7290a3f2d6cf"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.277169 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "fffd44a1-fefc-43dc-81d7-7290a3f2d6cf" (UID: "fffd44a1-fefc-43dc-81d7-7290a3f2d6cf"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.277275 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-kube-api-access-7mbg8" (OuterVolumeSpecName: "kube-api-access-7mbg8") pod "fffd44a1-fefc-43dc-81d7-7290a3f2d6cf" (UID: "fffd44a1-fefc-43dc-81d7-7290a3f2d6cf"). InnerVolumeSpecName "kube-api-access-7mbg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.331884 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mbg8\" (UniqueName: \"kubernetes.io/projected/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-kube-api-access-7mbg8\") on node \"crc\" DevicePath \"\"" Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.331919 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.331930 4705 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.331938 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.331946 4705 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.331955 4705 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:55:29 crc kubenswrapper[4705]: I0124 07:55:29.331964 4705 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:55:30 crc kubenswrapper[4705]: I0124 07:55:30.036666 4705 generic.go:334] "Generic (PLEG): container finished" podID="c6531f50-1387-4cfa-946d-ef139131e7d0" containerID="935fbd36052d7196d701789dc7ad6a0bf2deaa8665acac7ae0f538697043e69a" exitCode=0 Jan 24 07:55:30 crc kubenswrapper[4705]: I0124 07:55:30.036736 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" event={"ID":"c6531f50-1387-4cfa-946d-ef139131e7d0","Type":"ContainerDied","Data":"935fbd36052d7196d701789dc7ad6a0bf2deaa8665acac7ae0f538697043e69a"} Jan 24 07:55:30 crc kubenswrapper[4705]: I0124 07:55:30.039304 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-n7xmf_fffd44a1-fefc-43dc-81d7-7290a3f2d6cf/console/0.log" Jan 24 07:55:30 crc kubenswrapper[4705]: I0124 07:55:30.039340 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n7xmf" 
event={"ID":"fffd44a1-fefc-43dc-81d7-7290a3f2d6cf","Type":"ContainerDied","Data":"6f65c273b1c4553e8d9011893a027e1be2ebf39a3606e40d444cd0e5499f6c8a"} Jan 24 07:55:30 crc kubenswrapper[4705]: I0124 07:55:30.039367 4705 scope.go:117] "RemoveContainer" containerID="9cc1ae9a09dcbfe038c06abc106d698166b6b7a3bea1ee9f6454b458f51521d0" Jan 24 07:55:30 crc kubenswrapper[4705]: I0124 07:55:30.039460 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-n7xmf" Jan 24 07:55:30 crc kubenswrapper[4705]: I0124 07:55:30.069729 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-n7xmf"] Jan 24 07:55:30 crc kubenswrapper[4705]: I0124 07:55:30.073453 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-n7xmf"] Jan 24 07:55:31 crc kubenswrapper[4705]: I0124 07:55:31.048149 4705 generic.go:334] "Generic (PLEG): container finished" podID="c6531f50-1387-4cfa-946d-ef139131e7d0" containerID="eb8ada5280b1c9b09e77c56469bc9fd17cd32e6feadbce2ad1d27a0cbc62328f" exitCode=0 Jan 24 07:55:31 crc kubenswrapper[4705]: I0124 07:55:31.048223 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" event={"ID":"c6531f50-1387-4cfa-946d-ef139131e7d0","Type":"ContainerDied","Data":"eb8ada5280b1c9b09e77c56469bc9fd17cd32e6feadbce2ad1d27a0cbc62328f"} Jan 24 07:55:31 crc kubenswrapper[4705]: I0124 07:55:31.583055 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fffd44a1-fefc-43dc-81d7-7290a3f2d6cf" path="/var/lib/kubelet/pods/fffd44a1-fefc-43dc-81d7-7290a3f2d6cf/volumes" Jan 24 07:55:32 crc kubenswrapper[4705]: I0124 07:55:32.260908 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" Jan 24 07:55:32 crc kubenswrapper[4705]: I0124 07:55:32.369530 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fxfr\" (UniqueName: \"kubernetes.io/projected/c6531f50-1387-4cfa-946d-ef139131e7d0-kube-api-access-6fxfr\") pod \"c6531f50-1387-4cfa-946d-ef139131e7d0\" (UID: \"c6531f50-1387-4cfa-946d-ef139131e7d0\") " Jan 24 07:55:32 crc kubenswrapper[4705]: I0124 07:55:32.369598 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6531f50-1387-4cfa-946d-ef139131e7d0-bundle\") pod \"c6531f50-1387-4cfa-946d-ef139131e7d0\" (UID: \"c6531f50-1387-4cfa-946d-ef139131e7d0\") " Jan 24 07:55:32 crc kubenswrapper[4705]: I0124 07:55:32.369723 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6531f50-1387-4cfa-946d-ef139131e7d0-util\") pod \"c6531f50-1387-4cfa-946d-ef139131e7d0\" (UID: \"c6531f50-1387-4cfa-946d-ef139131e7d0\") " Jan 24 07:55:32 crc kubenswrapper[4705]: I0124 07:55:32.371448 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6531f50-1387-4cfa-946d-ef139131e7d0-bundle" (OuterVolumeSpecName: "bundle") pod "c6531f50-1387-4cfa-946d-ef139131e7d0" (UID: "c6531f50-1387-4cfa-946d-ef139131e7d0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:55:32 crc kubenswrapper[4705]: I0124 07:55:32.376167 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6531f50-1387-4cfa-946d-ef139131e7d0-kube-api-access-6fxfr" (OuterVolumeSpecName: "kube-api-access-6fxfr") pod "c6531f50-1387-4cfa-946d-ef139131e7d0" (UID: "c6531f50-1387-4cfa-946d-ef139131e7d0"). InnerVolumeSpecName "kube-api-access-6fxfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:55:32 crc kubenswrapper[4705]: I0124 07:55:32.384211 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6531f50-1387-4cfa-946d-ef139131e7d0-util" (OuterVolumeSpecName: "util") pod "c6531f50-1387-4cfa-946d-ef139131e7d0" (UID: "c6531f50-1387-4cfa-946d-ef139131e7d0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:55:32 crc kubenswrapper[4705]: I0124 07:55:32.477283 4705 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6531f50-1387-4cfa-946d-ef139131e7d0-util\") on node \"crc\" DevicePath \"\"" Jan 24 07:55:32 crc kubenswrapper[4705]: I0124 07:55:32.477614 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fxfr\" (UniqueName: \"kubernetes.io/projected/c6531f50-1387-4cfa-946d-ef139131e7d0-kube-api-access-6fxfr\") on node \"crc\" DevicePath \"\"" Jan 24 07:55:32 crc kubenswrapper[4705]: I0124 07:55:32.477661 4705 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6531f50-1387-4cfa-946d-ef139131e7d0-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:55:33 crc kubenswrapper[4705]: I0124 07:55:33.061983 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" event={"ID":"c6531f50-1387-4cfa-946d-ef139131e7d0","Type":"ContainerDied","Data":"eb08e9e7eace112345a86ad69534e5dcaf0b72e77a390599bc68455e6e509115"} Jan 24 07:55:33 crc kubenswrapper[4705]: I0124 07:55:33.062032 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp" Jan 24 07:55:33 crc kubenswrapper[4705]: I0124 07:55:33.062033 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb08e9e7eace112345a86ad69534e5dcaf0b72e77a390599bc68455e6e509115" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.171930 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k"] Jan 24 07:55:42 crc kubenswrapper[4705]: E0124 07:55:42.172780 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fffd44a1-fefc-43dc-81d7-7290a3f2d6cf" containerName="console" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.172796 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fffd44a1-fefc-43dc-81d7-7290a3f2d6cf" containerName="console" Jan 24 07:55:42 crc kubenswrapper[4705]: E0124 07:55:42.172811 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6531f50-1387-4cfa-946d-ef139131e7d0" containerName="util" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.172839 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6531f50-1387-4cfa-946d-ef139131e7d0" containerName="util" Jan 24 07:55:42 crc kubenswrapper[4705]: E0124 07:55:42.172855 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6531f50-1387-4cfa-946d-ef139131e7d0" containerName="pull" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.172863 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6531f50-1387-4cfa-946d-ef139131e7d0" containerName="pull" Jan 24 07:55:42 crc kubenswrapper[4705]: E0124 07:55:42.172873 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6531f50-1387-4cfa-946d-ef139131e7d0" containerName="extract" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.172880 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c6531f50-1387-4cfa-946d-ef139131e7d0" containerName="extract" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.173013 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="fffd44a1-fefc-43dc-81d7-7290a3f2d6cf" containerName="console" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.173025 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6531f50-1387-4cfa-946d-ef139131e7d0" containerName="extract" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.173518 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.178463 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.179137 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.181171 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fjdm6" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.181330 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.183117 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.194300 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k"] Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.258417 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/6709cabe-fa28-43e6-9999-2da688ab6871-webhook-cert\") pod \"metallb-operator-controller-manager-5b969cdf7-4cn5k\" (UID: \"6709cabe-fa28-43e6-9999-2da688ab6871\") " pod="metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.259181 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvp2m\" (UniqueName: \"kubernetes.io/projected/6709cabe-fa28-43e6-9999-2da688ab6871-kube-api-access-wvp2m\") pod \"metallb-operator-controller-manager-5b969cdf7-4cn5k\" (UID: \"6709cabe-fa28-43e6-9999-2da688ab6871\") " pod="metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.259226 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6709cabe-fa28-43e6-9999-2da688ab6871-apiservice-cert\") pod \"metallb-operator-controller-manager-5b969cdf7-4cn5k\" (UID: \"6709cabe-fa28-43e6-9999-2da688ab6871\") " pod="metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.361031 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvp2m\" (UniqueName: \"kubernetes.io/projected/6709cabe-fa28-43e6-9999-2da688ab6871-kube-api-access-wvp2m\") pod \"metallb-operator-controller-manager-5b969cdf7-4cn5k\" (UID: \"6709cabe-fa28-43e6-9999-2da688ab6871\") " pod="metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.361092 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6709cabe-fa28-43e6-9999-2da688ab6871-apiservice-cert\") pod \"metallb-operator-controller-manager-5b969cdf7-4cn5k\" (UID: 
\"6709cabe-fa28-43e6-9999-2da688ab6871\") " pod="metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.361204 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6709cabe-fa28-43e6-9999-2da688ab6871-webhook-cert\") pod \"metallb-operator-controller-manager-5b969cdf7-4cn5k\" (UID: \"6709cabe-fa28-43e6-9999-2da688ab6871\") " pod="metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.367626 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6709cabe-fa28-43e6-9999-2da688ab6871-webhook-cert\") pod \"metallb-operator-controller-manager-5b969cdf7-4cn5k\" (UID: \"6709cabe-fa28-43e6-9999-2da688ab6871\") " pod="metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.372865 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6709cabe-fa28-43e6-9999-2da688ab6871-apiservice-cert\") pod \"metallb-operator-controller-manager-5b969cdf7-4cn5k\" (UID: \"6709cabe-fa28-43e6-9999-2da688ab6871\") " pod="metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.387137 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvp2m\" (UniqueName: \"kubernetes.io/projected/6709cabe-fa28-43e6-9999-2da688ab6871-kube-api-access-wvp2m\") pod \"metallb-operator-controller-manager-5b969cdf7-4cn5k\" (UID: \"6709cabe-fa28-43e6-9999-2da688ab6871\") " pod="metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.432833 4705 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf"] Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.433570 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.435178 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.435555 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.435707 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-brr6r" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.458683 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf"] Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.493511 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.563473 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/67f67aca-78d5-495d-a47d-ce2fdefc502b-apiservice-cert\") pod \"metallb-operator-webhook-server-bfbbd9768-tw7cf\" (UID: \"67f67aca-78d5-495d-a47d-ce2fdefc502b\") " pod="metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.563840 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmj7f\" (UniqueName: \"kubernetes.io/projected/67f67aca-78d5-495d-a47d-ce2fdefc502b-kube-api-access-dmj7f\") pod \"metallb-operator-webhook-server-bfbbd9768-tw7cf\" (UID: \"67f67aca-78d5-495d-a47d-ce2fdefc502b\") " pod="metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.563871 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/67f67aca-78d5-495d-a47d-ce2fdefc502b-webhook-cert\") pod \"metallb-operator-webhook-server-bfbbd9768-tw7cf\" (UID: \"67f67aca-78d5-495d-a47d-ce2fdefc502b\") " pod="metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.664841 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmj7f\" (UniqueName: \"kubernetes.io/projected/67f67aca-78d5-495d-a47d-ce2fdefc502b-kube-api-access-dmj7f\") pod \"metallb-operator-webhook-server-bfbbd9768-tw7cf\" (UID: \"67f67aca-78d5-495d-a47d-ce2fdefc502b\") " pod="metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.664927 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/67f67aca-78d5-495d-a47d-ce2fdefc502b-webhook-cert\") pod \"metallb-operator-webhook-server-bfbbd9768-tw7cf\" (UID: \"67f67aca-78d5-495d-a47d-ce2fdefc502b\") " pod="metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.665020 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/67f67aca-78d5-495d-a47d-ce2fdefc502b-apiservice-cert\") pod \"metallb-operator-webhook-server-bfbbd9768-tw7cf\" (UID: \"67f67aca-78d5-495d-a47d-ce2fdefc502b\") " pod="metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.675520 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/67f67aca-78d5-495d-a47d-ce2fdefc502b-webhook-cert\") pod \"metallb-operator-webhook-server-bfbbd9768-tw7cf\" (UID: \"67f67aca-78d5-495d-a47d-ce2fdefc502b\") " pod="metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.676150 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/67f67aca-78d5-495d-a47d-ce2fdefc502b-apiservice-cert\") pod \"metallb-operator-webhook-server-bfbbd9768-tw7cf\" (UID: \"67f67aca-78d5-495d-a47d-ce2fdefc502b\") " pod="metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.685592 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmj7f\" (UniqueName: \"kubernetes.io/projected/67f67aca-78d5-495d-a47d-ce2fdefc502b-kube-api-access-dmj7f\") pod \"metallb-operator-webhook-server-bfbbd9768-tw7cf\" (UID: \"67f67aca-78d5-495d-a47d-ce2fdefc502b\") " 
pod="metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.747689 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf" Jan 24 07:55:42 crc kubenswrapper[4705]: I0124 07:55:42.946412 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k"] Jan 24 07:55:42 crc kubenswrapper[4705]: W0124 07:55:42.954884 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6709cabe_fa28_43e6_9999_2da688ab6871.slice/crio-877c951fbad30eb405c03454291b9bfb60af6b64a4ed4c1210d58c823af5f019 WatchSource:0}: Error finding container 877c951fbad30eb405c03454291b9bfb60af6b64a4ed4c1210d58c823af5f019: Status 404 returned error can't find the container with id 877c951fbad30eb405c03454291b9bfb60af6b64a4ed4c1210d58c823af5f019 Jan 24 07:55:43 crc kubenswrapper[4705]: I0124 07:55:43.015939 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf"] Jan 24 07:55:43 crc kubenswrapper[4705]: W0124 07:55:43.026445 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67f67aca_78d5_495d_a47d_ce2fdefc502b.slice/crio-b368c1863c9b6208f63408bb2ba3c9570867693ed5e98295b9776b5261a4c36f WatchSource:0}: Error finding container b368c1863c9b6208f63408bb2ba3c9570867693ed5e98295b9776b5261a4c36f: Status 404 returned error can't find the container with id b368c1863c9b6208f63408bb2ba3c9570867693ed5e98295b9776b5261a4c36f Jan 24 07:55:43 crc kubenswrapper[4705]: I0124 07:55:43.113486 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf" 
event={"ID":"67f67aca-78d5-495d-a47d-ce2fdefc502b","Type":"ContainerStarted","Data":"b368c1863c9b6208f63408bb2ba3c9570867693ed5e98295b9776b5261a4c36f"} Jan 24 07:55:43 crc kubenswrapper[4705]: I0124 07:55:43.114549 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k" event={"ID":"6709cabe-fa28-43e6-9999-2da688ab6871","Type":"ContainerStarted","Data":"877c951fbad30eb405c03454291b9bfb60af6b64a4ed4c1210d58c823af5f019"} Jan 24 07:55:48 crc kubenswrapper[4705]: I0124 07:55:48.142841 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k" event={"ID":"6709cabe-fa28-43e6-9999-2da688ab6871","Type":"ContainerStarted","Data":"d37a0bb5948c7d32012de1fd57dc42b3547895bff9442b30b299100384f0088b"} Jan 24 07:55:48 crc kubenswrapper[4705]: I0124 07:55:48.143412 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k" Jan 24 07:55:48 crc kubenswrapper[4705]: I0124 07:55:48.145004 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf" event={"ID":"67f67aca-78d5-495d-a47d-ce2fdefc502b","Type":"ContainerStarted","Data":"d041cb312947586aff69b55cd2eb3dbffa0faf22b1c2ce334b58c41bf19cb0b5"} Jan 24 07:55:48 crc kubenswrapper[4705]: I0124 07:55:48.145169 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf" Jan 24 07:55:48 crc kubenswrapper[4705]: I0124 07:55:48.169269 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k" podStartSLOduration=1.404307 podStartE2EDuration="6.16925039s" podCreationTimestamp="2026-01-24 07:55:42 +0000 UTC" firstStartedPulling="2026-01-24 07:55:42.959024835 +0000 UTC m=+881.678898123" 
lastFinishedPulling="2026-01-24 07:55:47.723968225 +0000 UTC m=+886.443841513" observedRunningTime="2026-01-24 07:55:48.167648464 +0000 UTC m=+886.887521762" watchObservedRunningTime="2026-01-24 07:55:48.16925039 +0000 UTC m=+886.889123678" Jan 24 07:55:48 crc kubenswrapper[4705]: I0124 07:55:48.188083 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf" podStartSLOduration=1.4747667039999999 podStartE2EDuration="6.188062678s" podCreationTimestamp="2026-01-24 07:55:42 +0000 UTC" firstStartedPulling="2026-01-24 07:55:43.029110589 +0000 UTC m=+881.748983877" lastFinishedPulling="2026-01-24 07:55:47.742406563 +0000 UTC m=+886.462279851" observedRunningTime="2026-01-24 07:55:48.186786911 +0000 UTC m=+886.906660219" watchObservedRunningTime="2026-01-24 07:55:48.188062678 +0000 UTC m=+886.907935966" Jan 24 07:56:02 crc kubenswrapper[4705]: I0124 07:56:02.753342 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-bfbbd9768-tw7cf" Jan 24 07:56:22 crc kubenswrapper[4705]: I0124 07:56:22.495712 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5b969cdf7-4cn5k" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.371487 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-47zdb"] Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.373886 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-47zdb" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.383247 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vpqbf" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.388020 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.391298 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-crt2p"] Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.394118 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.395692 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-47zdb"] Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.396288 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.396785 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.479627 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-ff8n7"] Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.486831 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh9bd\" (UniqueName: \"kubernetes.io/projected/d9a20c75-aae2-4b5c-9c15-615590399718-kube-api-access-zh9bd\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.486885 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nh42l\" (UniqueName: \"kubernetes.io/projected/7ed76590-151f-416d-b485-dd0ec7a67fcc-kube-api-access-nh42l\") pod \"frr-k8s-webhook-server-7df86c4f6c-47zdb\" (UID: \"7ed76590-151f-416d-b485-dd0ec7a67fcc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-47zdb" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.486933 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d9a20c75-aae2-4b5c-9c15-615590399718-frr-startup\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.486955 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d9a20c75-aae2-4b5c-9c15-615590399718-frr-conf\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.486988 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9a20c75-aae2-4b5c-9c15-615590399718-metrics-certs\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.487014 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d9a20c75-aae2-4b5c-9c15-615590399718-metrics\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.487034 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/7ed76590-151f-416d-b485-dd0ec7a67fcc-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-47zdb\" (UID: \"7ed76590-151f-416d-b485-dd0ec7a67fcc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-47zdb" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.487064 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d9a20c75-aae2-4b5c-9c15-615590399718-reloader\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.487167 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d9a20c75-aae2-4b5c-9c15-615590399718-frr-sockets\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.496098 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-ff8n7" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.499305 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.499850 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-dwsk6" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.500270 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.502796 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.504941 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-zq9jr"] Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.506201 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-zq9jr" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.509140 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.524568 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-zq9jr"] Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.588861 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d9a20c75-aae2-4b5c-9c15-615590399718-frr-startup\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.589175 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6cgb\" (UniqueName: \"kubernetes.io/projected/b43877b9-1325-4a18-abe6-0aea41048802-kube-api-access-m6cgb\") pod \"speaker-ff8n7\" (UID: \"b43877b9-1325-4a18-abe6-0aea41048802\") " pod="metallb-system/speaker-ff8n7" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.589199 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d9a20c75-aae2-4b5c-9c15-615590399718-frr-conf\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.589231 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9a20c75-aae2-4b5c-9c15-615590399718-metrics-certs\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.589263 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d9a20c75-aae2-4b5c-9c15-615590399718-metrics\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.589285 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b43877b9-1325-4a18-abe6-0aea41048802-memberlist\") pod \"speaker-ff8n7\" (UID: \"b43877b9-1325-4a18-abe6-0aea41048802\") " pod="metallb-system/speaker-ff8n7" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.589320 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ed76590-151f-416d-b485-dd0ec7a67fcc-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-47zdb\" (UID: \"7ed76590-151f-416d-b485-dd0ec7a67fcc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-47zdb" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.589353 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d9a20c75-aae2-4b5c-9c15-615590399718-reloader\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.589373 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d9a20c75-aae2-4b5c-9c15-615590399718-frr-sockets\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.589402 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q94hx\" (UniqueName: 
\"kubernetes.io/projected/24055761-2526-4195-98fd-ba2b83bc9f20-kube-api-access-q94hx\") pod \"controller-6968d8fdc4-zq9jr\" (UID: \"24055761-2526-4195-98fd-ba2b83bc9f20\") " pod="metallb-system/controller-6968d8fdc4-zq9jr" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.589425 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24055761-2526-4195-98fd-ba2b83bc9f20-metrics-certs\") pod \"controller-6968d8fdc4-zq9jr\" (UID: \"24055761-2526-4195-98fd-ba2b83bc9f20\") " pod="metallb-system/controller-6968d8fdc4-zq9jr" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.589452 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh9bd\" (UniqueName: \"kubernetes.io/projected/d9a20c75-aae2-4b5c-9c15-615590399718-kube-api-access-zh9bd\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.589482 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh42l\" (UniqueName: \"kubernetes.io/projected/7ed76590-151f-416d-b485-dd0ec7a67fcc-kube-api-access-nh42l\") pod \"frr-k8s-webhook-server-7df86c4f6c-47zdb\" (UID: \"7ed76590-151f-416d-b485-dd0ec7a67fcc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-47zdb" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.589497 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b43877b9-1325-4a18-abe6-0aea41048802-metrics-certs\") pod \"speaker-ff8n7\" (UID: \"b43877b9-1325-4a18-abe6-0aea41048802\") " pod="metallb-system/speaker-ff8n7" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.589522 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" 
(UniqueName: \"kubernetes.io/configmap/b43877b9-1325-4a18-abe6-0aea41048802-metallb-excludel2\") pod \"speaker-ff8n7\" (UID: \"b43877b9-1325-4a18-abe6-0aea41048802\") " pod="metallb-system/speaker-ff8n7" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.590352 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24055761-2526-4195-98fd-ba2b83bc9f20-cert\") pod \"controller-6968d8fdc4-zq9jr\" (UID: \"24055761-2526-4195-98fd-ba2b83bc9f20\") " pod="metallb-system/controller-6968d8fdc4-zq9jr" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.591851 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d9a20c75-aae2-4b5c-9c15-615590399718-frr-startup\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.592121 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d9a20c75-aae2-4b5c-9c15-615590399718-frr-conf\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: E0124 07:56:23.592200 4705 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 24 07:56:23 crc kubenswrapper[4705]: E0124 07:56:23.592249 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9a20c75-aae2-4b5c-9c15-615590399718-metrics-certs podName:d9a20c75-aae2-4b5c-9c15-615590399718 nodeName:}" failed. No retries permitted until 2026-01-24 07:56:24.092234344 +0000 UTC m=+922.812107622 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9a20c75-aae2-4b5c-9c15-615590399718-metrics-certs") pod "frr-k8s-crt2p" (UID: "d9a20c75-aae2-4b5c-9c15-615590399718") : secret "frr-k8s-certs-secret" not found Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.592442 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d9a20c75-aae2-4b5c-9c15-615590399718-metrics\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.593236 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d9a20c75-aae2-4b5c-9c15-615590399718-reloader\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.593489 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d9a20c75-aae2-4b5c-9c15-615590399718-frr-sockets\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.605230 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ed76590-151f-416d-b485-dd0ec7a67fcc-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-47zdb\" (UID: \"7ed76590-151f-416d-b485-dd0ec7a67fcc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-47zdb" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.612252 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh9bd\" (UniqueName: \"kubernetes.io/projected/d9a20c75-aae2-4b5c-9c15-615590399718-kube-api-access-zh9bd\") pod \"frr-k8s-crt2p\" (UID: 
\"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.616089 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh42l\" (UniqueName: \"kubernetes.io/projected/7ed76590-151f-416d-b485-dd0ec7a67fcc-kube-api-access-nh42l\") pod \"frr-k8s-webhook-server-7df86c4f6c-47zdb\" (UID: \"7ed76590-151f-416d-b485-dd0ec7a67fcc\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-47zdb" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.691154 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6cgb\" (UniqueName: \"kubernetes.io/projected/b43877b9-1325-4a18-abe6-0aea41048802-kube-api-access-m6cgb\") pod \"speaker-ff8n7\" (UID: \"b43877b9-1325-4a18-abe6-0aea41048802\") " pod="metallb-system/speaker-ff8n7" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.691252 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b43877b9-1325-4a18-abe6-0aea41048802-memberlist\") pod \"speaker-ff8n7\" (UID: \"b43877b9-1325-4a18-abe6-0aea41048802\") " pod="metallb-system/speaker-ff8n7" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.691356 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q94hx\" (UniqueName: \"kubernetes.io/projected/24055761-2526-4195-98fd-ba2b83bc9f20-kube-api-access-q94hx\") pod \"controller-6968d8fdc4-zq9jr\" (UID: \"24055761-2526-4195-98fd-ba2b83bc9f20\") " pod="metallb-system/controller-6968d8fdc4-zq9jr" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.691384 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24055761-2526-4195-98fd-ba2b83bc9f20-metrics-certs\") pod \"controller-6968d8fdc4-zq9jr\" (UID: \"24055761-2526-4195-98fd-ba2b83bc9f20\") " 
pod="metallb-system/controller-6968d8fdc4-zq9jr" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.691427 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b43877b9-1325-4a18-abe6-0aea41048802-metrics-certs\") pod \"speaker-ff8n7\" (UID: \"b43877b9-1325-4a18-abe6-0aea41048802\") " pod="metallb-system/speaker-ff8n7" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.691489 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b43877b9-1325-4a18-abe6-0aea41048802-metallb-excludel2\") pod \"speaker-ff8n7\" (UID: \"b43877b9-1325-4a18-abe6-0aea41048802\") " pod="metallb-system/speaker-ff8n7" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.691513 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24055761-2526-4195-98fd-ba2b83bc9f20-cert\") pod \"controller-6968d8fdc4-zq9jr\" (UID: \"24055761-2526-4195-98fd-ba2b83bc9f20\") " pod="metallb-system/controller-6968d8fdc4-zq9jr" Jan 24 07:56:23 crc kubenswrapper[4705]: E0124 07:56:23.693283 4705 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 24 07:56:23 crc kubenswrapper[4705]: E0124 07:56:23.693376 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b43877b9-1325-4a18-abe6-0aea41048802-memberlist podName:b43877b9-1325-4a18-abe6-0aea41048802 nodeName:}" failed. No retries permitted until 2026-01-24 07:56:24.193350895 +0000 UTC m=+922.913224273 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b43877b9-1325-4a18-abe6-0aea41048802-memberlist") pod "speaker-ff8n7" (UID: "b43877b9-1325-4a18-abe6-0aea41048802") : secret "metallb-memberlist" not found Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.693565 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b43877b9-1325-4a18-abe6-0aea41048802-metallb-excludel2\") pod \"speaker-ff8n7\" (UID: \"b43877b9-1325-4a18-abe6-0aea41048802\") " pod="metallb-system/speaker-ff8n7" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.694808 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.695619 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b43877b9-1325-4a18-abe6-0aea41048802-metrics-certs\") pod \"speaker-ff8n7\" (UID: \"b43877b9-1325-4a18-abe6-0aea41048802\") " pod="metallb-system/speaker-ff8n7" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.704764 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24055761-2526-4195-98fd-ba2b83bc9f20-metrics-certs\") pod \"controller-6968d8fdc4-zq9jr\" (UID: \"24055761-2526-4195-98fd-ba2b83bc9f20\") " pod="metallb-system/controller-6968d8fdc4-zq9jr" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.706129 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-47zdb" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.711761 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q94hx\" (UniqueName: \"kubernetes.io/projected/24055761-2526-4195-98fd-ba2b83bc9f20-kube-api-access-q94hx\") pod \"controller-6968d8fdc4-zq9jr\" (UID: \"24055761-2526-4195-98fd-ba2b83bc9f20\") " pod="metallb-system/controller-6968d8fdc4-zq9jr" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.712892 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6cgb\" (UniqueName: \"kubernetes.io/projected/b43877b9-1325-4a18-abe6-0aea41048802-kube-api-access-m6cgb\") pod \"speaker-ff8n7\" (UID: \"b43877b9-1325-4a18-abe6-0aea41048802\") " pod="metallb-system/speaker-ff8n7" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.729125 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24055761-2526-4195-98fd-ba2b83bc9f20-cert\") pod \"controller-6968d8fdc4-zq9jr\" (UID: \"24055761-2526-4195-98fd-ba2b83bc9f20\") " pod="metallb-system/controller-6968d8fdc4-zq9jr" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.841700 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-zq9jr" Jan 24 07:56:23 crc kubenswrapper[4705]: I0124 07:56:23.933529 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-47zdb"] Jan 24 07:56:24 crc kubenswrapper[4705]: I0124 07:56:24.057911 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-zq9jr"] Jan 24 07:56:24 crc kubenswrapper[4705]: W0124 07:56:24.062225 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24055761_2526_4195_98fd_ba2b83bc9f20.slice/crio-4cc421bbaf57852e8df57d8fa04cee4b3c3e2d549f57df3bb2df15c4f7ee3b64 WatchSource:0}: Error finding container 4cc421bbaf57852e8df57d8fa04cee4b3c3e2d549f57df3bb2df15c4f7ee3b64: Status 404 returned error can't find the container with id 4cc421bbaf57852e8df57d8fa04cee4b3c3e2d549f57df3bb2df15c4f7ee3b64 Jan 24 07:56:24 crc kubenswrapper[4705]: I0124 07:56:24.123304 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9a20c75-aae2-4b5c-9c15-615590399718-metrics-certs\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:24 crc kubenswrapper[4705]: I0124 07:56:24.129161 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9a20c75-aae2-4b5c-9c15-615590399718-metrics-certs\") pod \"frr-k8s-crt2p\" (UID: \"d9a20c75-aae2-4b5c-9c15-615590399718\") " pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:24 crc kubenswrapper[4705]: I0124 07:56:24.224960 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b43877b9-1325-4a18-abe6-0aea41048802-memberlist\") pod \"speaker-ff8n7\" (UID: \"b43877b9-1325-4a18-abe6-0aea41048802\") " 
pod="metallb-system/speaker-ff8n7" Jan 24 07:56:24 crc kubenswrapper[4705]: E0124 07:56:24.225508 4705 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 24 07:56:24 crc kubenswrapper[4705]: E0124 07:56:24.225627 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b43877b9-1325-4a18-abe6-0aea41048802-memberlist podName:b43877b9-1325-4a18-abe6-0aea41048802 nodeName:}" failed. No retries permitted until 2026-01-24 07:56:25.225606382 +0000 UTC m=+923.945479670 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b43877b9-1325-4a18-abe6-0aea41048802-memberlist") pod "speaker-ff8n7" (UID: "b43877b9-1325-4a18-abe6-0aea41048802") : secret "metallb-memberlist" not found Jan 24 07:56:24 crc kubenswrapper[4705]: I0124 07:56:24.315766 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:24 crc kubenswrapper[4705]: I0124 07:56:24.394048 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zq9jr" event={"ID":"24055761-2526-4195-98fd-ba2b83bc9f20","Type":"ContainerStarted","Data":"79030835c481eaec1d5fb89dcb8fe09e6939dbe12680efb622333e0812570db5"} Jan 24 07:56:24 crc kubenswrapper[4705]: I0124 07:56:24.394106 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zq9jr" event={"ID":"24055761-2526-4195-98fd-ba2b83bc9f20","Type":"ContainerStarted","Data":"bbeb5157d151a53391bcacab58b9d0b24904d18aac5fd06827980536eae4e183"} Jan 24 07:56:24 crc kubenswrapper[4705]: I0124 07:56:24.394118 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zq9jr" event={"ID":"24055761-2526-4195-98fd-ba2b83bc9f20","Type":"ContainerStarted","Data":"4cc421bbaf57852e8df57d8fa04cee4b3c3e2d549f57df3bb2df15c4f7ee3b64"} Jan 24 07:56:24 crc 
kubenswrapper[4705]: I0124 07:56:24.394148 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-zq9jr" Jan 24 07:56:24 crc kubenswrapper[4705]: I0124 07:56:24.395367 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-47zdb" event={"ID":"7ed76590-151f-416d-b485-dd0ec7a67fcc","Type":"ContainerStarted","Data":"0d982675529fab67a20883c361e81f0017742c570fbb65e464581686a20a5aed"} Jan 24 07:56:24 crc kubenswrapper[4705]: I0124 07:56:24.418255 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-zq9jr" podStartSLOduration=1.41824232 podStartE2EDuration="1.41824232s" podCreationTimestamp="2026-01-24 07:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:56:24.416553121 +0000 UTC m=+923.136426409" watchObservedRunningTime="2026-01-24 07:56:24.41824232 +0000 UTC m=+923.138115598" Jan 24 07:56:25 crc kubenswrapper[4705]: I0124 07:56:25.239024 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b43877b9-1325-4a18-abe6-0aea41048802-memberlist\") pod \"speaker-ff8n7\" (UID: \"b43877b9-1325-4a18-abe6-0aea41048802\") " pod="metallb-system/speaker-ff8n7" Jan 24 07:56:25 crc kubenswrapper[4705]: I0124 07:56:25.244723 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b43877b9-1325-4a18-abe6-0aea41048802-memberlist\") pod \"speaker-ff8n7\" (UID: \"b43877b9-1325-4a18-abe6-0aea41048802\") " pod="metallb-system/speaker-ff8n7" Jan 24 07:56:25 crc kubenswrapper[4705]: I0124 07:56:25.319161 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-ff8n7" Jan 24 07:56:25 crc kubenswrapper[4705]: W0124 07:56:25.358745 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb43877b9_1325_4a18_abe6_0aea41048802.slice/crio-dc73557a78e47b0c447a41bb67eb6c6b2ee1fb4e39efbc3df5f8c14286a94caa WatchSource:0}: Error finding container dc73557a78e47b0c447a41bb67eb6c6b2ee1fb4e39efbc3df5f8c14286a94caa: Status 404 returned error can't find the container with id dc73557a78e47b0c447a41bb67eb6c6b2ee1fb4e39efbc3df5f8c14286a94caa Jan 24 07:56:25 crc kubenswrapper[4705]: I0124 07:56:25.404898 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ff8n7" event={"ID":"b43877b9-1325-4a18-abe6-0aea41048802","Type":"ContainerStarted","Data":"dc73557a78e47b0c447a41bb67eb6c6b2ee1fb4e39efbc3df5f8c14286a94caa"} Jan 24 07:56:25 crc kubenswrapper[4705]: I0124 07:56:25.409964 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-crt2p" event={"ID":"d9a20c75-aae2-4b5c-9c15-615590399718","Type":"ContainerStarted","Data":"97dce0df4e1a1d1b323b21bd7109e3485c21bdc4c6c486767c44f79bc20fae41"} Jan 24 07:56:26 crc kubenswrapper[4705]: I0124 07:56:26.433298 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ff8n7" event={"ID":"b43877b9-1325-4a18-abe6-0aea41048802","Type":"ContainerStarted","Data":"42b706e54212d48bf4ce183e19e335bf9b24ca4a1d3aacf8fac03dd78222e8e1"} Jan 24 07:56:26 crc kubenswrapper[4705]: I0124 07:56:26.433648 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ff8n7" event={"ID":"b43877b9-1325-4a18-abe6-0aea41048802","Type":"ContainerStarted","Data":"3ef5b2e876f5f718c3cb4dd120d3b6b557f1e34b145a858c1897d71d88f26879"} Jan 24 07:56:26 crc kubenswrapper[4705]: I0124 07:56:26.433896 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-ff8n7" Jan 24 
07:56:26 crc kubenswrapper[4705]: I0124 07:56:26.452289 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-ff8n7" podStartSLOduration=3.452273531 podStartE2EDuration="3.452273531s" podCreationTimestamp="2026-01-24 07:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:56:26.451102908 +0000 UTC m=+925.170976206" watchObservedRunningTime="2026-01-24 07:56:26.452273531 +0000 UTC m=+925.172146819" Jan 24 07:56:31 crc kubenswrapper[4705]: I0124 07:56:31.522159 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-47zdb" event={"ID":"7ed76590-151f-416d-b485-dd0ec7a67fcc","Type":"ContainerStarted","Data":"ce84a6c4e2fbdba3dbf145121b1d546847edf4a419244041eaad63fa470ddcc5"} Jan 24 07:56:31 crc kubenswrapper[4705]: I0124 07:56:31.524955 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-47zdb" Jan 24 07:56:31 crc kubenswrapper[4705]: I0124 07:56:31.533405 4705 generic.go:334] "Generic (PLEG): container finished" podID="d9a20c75-aae2-4b5c-9c15-615590399718" containerID="45cc1073005c38c806058b175e5f5025dd43414369f7e588923bdea5bbd1c20f" exitCode=0 Jan 24 07:56:31 crc kubenswrapper[4705]: I0124 07:56:31.533465 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-crt2p" event={"ID":"d9a20c75-aae2-4b5c-9c15-615590399718","Type":"ContainerDied","Data":"45cc1073005c38c806058b175e5f5025dd43414369f7e588923bdea5bbd1c20f"} Jan 24 07:56:31 crc kubenswrapper[4705]: I0124 07:56:31.552115 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-47zdb" podStartSLOduration=1.471097552 podStartE2EDuration="8.552099207s" podCreationTimestamp="2026-01-24 07:56:23 +0000 UTC" firstStartedPulling="2026-01-24 
07:56:23.948533054 +0000 UTC m=+922.668406342" lastFinishedPulling="2026-01-24 07:56:31.029534709 +0000 UTC m=+929.749407997" observedRunningTime="2026-01-24 07:56:31.550600694 +0000 UTC m=+930.270473982" watchObservedRunningTime="2026-01-24 07:56:31.552099207 +0000 UTC m=+930.271972495" Jan 24 07:56:32 crc kubenswrapper[4705]: I0124 07:56:32.540458 4705 generic.go:334] "Generic (PLEG): container finished" podID="d9a20c75-aae2-4b5c-9c15-615590399718" containerID="332d11b201a31facaaedf779b170e55c485f3513b538be5ead8897d2f4f3c46a" exitCode=0 Jan 24 07:56:32 crc kubenswrapper[4705]: I0124 07:56:32.540505 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-crt2p" event={"ID":"d9a20c75-aae2-4b5c-9c15-615590399718","Type":"ContainerDied","Data":"332d11b201a31facaaedf779b170e55c485f3513b538be5ead8897d2f4f3c46a"} Jan 24 07:56:33 crc kubenswrapper[4705]: I0124 07:56:33.550790 4705 generic.go:334] "Generic (PLEG): container finished" podID="d9a20c75-aae2-4b5c-9c15-615590399718" containerID="dea65969a7c0c6ac356d2610b22f045a04e23c3c6d73e086896a144af2b9e90f" exitCode=0 Jan 24 07:56:33 crc kubenswrapper[4705]: I0124 07:56:33.550892 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-crt2p" event={"ID":"d9a20c75-aae2-4b5c-9c15-615590399718","Type":"ContainerDied","Data":"dea65969a7c0c6ac356d2610b22f045a04e23c3c6d73e086896a144af2b9e90f"} Jan 24 07:56:34 crc kubenswrapper[4705]: I0124 07:56:34.558765 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-crt2p" event={"ID":"d9a20c75-aae2-4b5c-9c15-615590399718","Type":"ContainerStarted","Data":"9b4fa1d53ce57ddbdf41b42abd545121abd2570c2a6b8be69ffd9df9cb561fe7"} Jan 24 07:56:34 crc kubenswrapper[4705]: I0124 07:56:34.559108 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-crt2p" 
event={"ID":"d9a20c75-aae2-4b5c-9c15-615590399718","Type":"ContainerStarted","Data":"6b21346b38486734df052d80786a5009862a7cc83b1820d4fc5748b51e7fc581"} Jan 24 07:56:34 crc kubenswrapper[4705]: I0124 07:56:34.559119 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-crt2p" event={"ID":"d9a20c75-aae2-4b5c-9c15-615590399718","Type":"ContainerStarted","Data":"31ef5cfeb520674b1991143ff203ca406a20ec590046948d8a2fca58699e2f84"} Jan 24 07:56:34 crc kubenswrapper[4705]: I0124 07:56:34.559127 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-crt2p" event={"ID":"d9a20c75-aae2-4b5c-9c15-615590399718","Type":"ContainerStarted","Data":"bea98e211e5d1f92dd4dc5095afd326e7e51d227987f150af105dd0a41b874b1"} Jan 24 07:56:35 crc kubenswrapper[4705]: I0124 07:56:35.324774 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-ff8n7" Jan 24 07:56:35 crc kubenswrapper[4705]: I0124 07:56:35.570641 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-crt2p" event={"ID":"d9a20c75-aae2-4b5c-9c15-615590399718","Type":"ContainerStarted","Data":"3d315a297b216717071e7d3ff87c68dc9680d3e362cb59edcabd45eca3349966"} Jan 24 07:56:35 crc kubenswrapper[4705]: I0124 07:56:35.570683 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-crt2p" event={"ID":"d9a20c75-aae2-4b5c-9c15-615590399718","Type":"ContainerStarted","Data":"5fc51f7236ca60f1536fbd78e5c3cebe4db50ab8f3a55b30c2c28d58f73f67f8"} Jan 24 07:56:35 crc kubenswrapper[4705]: I0124 07:56:35.571585 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:35 crc kubenswrapper[4705]: I0124 07:56:35.594264 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-crt2p" podStartSLOduration=5.965137793 podStartE2EDuration="12.594243435s" podCreationTimestamp="2026-01-24 07:56:23 +0000 
UTC" firstStartedPulling="2026-01-24 07:56:24.416047006 +0000 UTC m=+923.135920294" lastFinishedPulling="2026-01-24 07:56:31.045152638 +0000 UTC m=+929.765025936" observedRunningTime="2026-01-24 07:56:35.590905769 +0000 UTC m=+934.310779067" watchObservedRunningTime="2026-01-24 07:56:35.594243435 +0000 UTC m=+934.314116733" Jan 24 07:56:37 crc kubenswrapper[4705]: I0124 07:56:37.071679 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:56:37 crc kubenswrapper[4705]: I0124 07:56:37.071953 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:56:38 crc kubenswrapper[4705]: I0124 07:56:38.414783 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xxpwv"] Jan 24 07:56:38 crc kubenswrapper[4705]: I0124 07:56:38.416179 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xxpwv" Jan 24 07:56:38 crc kubenswrapper[4705]: I0124 07:56:38.419426 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 24 07:56:38 crc kubenswrapper[4705]: I0124 07:56:38.420513 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-vt2ql" Jan 24 07:56:38 crc kubenswrapper[4705]: I0124 07:56:38.421641 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 24 07:56:38 crc kubenswrapper[4705]: I0124 07:56:38.429072 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xxpwv"] Jan 24 07:56:38 crc kubenswrapper[4705]: I0124 07:56:38.706304 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7smzb\" (UniqueName: \"kubernetes.io/projected/cfa96111-a597-405b-a026-d78eaa12859c-kube-api-access-7smzb\") pod \"openstack-operator-index-xxpwv\" (UID: \"cfa96111-a597-405b-a026-d78eaa12859c\") " pod="openstack-operators/openstack-operator-index-xxpwv" Jan 24 07:56:38 crc kubenswrapper[4705]: I0124 07:56:38.809211 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7smzb\" (UniqueName: \"kubernetes.io/projected/cfa96111-a597-405b-a026-d78eaa12859c-kube-api-access-7smzb\") pod \"openstack-operator-index-xxpwv\" (UID: \"cfa96111-a597-405b-a026-d78eaa12859c\") " pod="openstack-operators/openstack-operator-index-xxpwv" Jan 24 07:56:38 crc kubenswrapper[4705]: I0124 07:56:38.834922 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7smzb\" (UniqueName: \"kubernetes.io/projected/cfa96111-a597-405b-a026-d78eaa12859c-kube-api-access-7smzb\") pod \"openstack-operator-index-xxpwv\" (UID: 
\"cfa96111-a597-405b-a026-d78eaa12859c\") " pod="openstack-operators/openstack-operator-index-xxpwv" Jan 24 07:56:39 crc kubenswrapper[4705]: I0124 07:56:39.033006 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xxpwv" Jan 24 07:56:39 crc kubenswrapper[4705]: I0124 07:56:39.224792 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xxpwv"] Jan 24 07:56:39 crc kubenswrapper[4705]: I0124 07:56:39.316246 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:39 crc kubenswrapper[4705]: I0124 07:56:39.359356 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:39 crc kubenswrapper[4705]: I0124 07:56:39.723255 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xxpwv" event={"ID":"cfa96111-a597-405b-a026-d78eaa12859c","Type":"ContainerStarted","Data":"aec0c1960442cb9f872c0ecff4e15abb0ec8a5f96dbb7a3b464800476568a924"} Jan 24 07:56:41 crc kubenswrapper[4705]: I0124 07:56:41.591614 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xxpwv"] Jan 24 07:56:41 crc kubenswrapper[4705]: I0124 07:56:41.735118 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xxpwv" event={"ID":"cfa96111-a597-405b-a026-d78eaa12859c","Type":"ContainerStarted","Data":"1684e1fc8b579c8d2529076a050d1791d198e0c68466af9fcc9cc1f8e4a8a71d"} Jan 24 07:56:41 crc kubenswrapper[4705]: I0124 07:56:41.748449 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xxpwv" podStartSLOduration=1.775478848 podStartE2EDuration="3.748427831s" podCreationTimestamp="2026-01-24 07:56:38 +0000 UTC" firstStartedPulling="2026-01-24 
07:56:39.24249688 +0000 UTC m=+937.962370158" lastFinishedPulling="2026-01-24 07:56:41.215445853 +0000 UTC m=+939.935319141" observedRunningTime="2026-01-24 07:56:41.746056413 +0000 UTC m=+940.465929701" watchObservedRunningTime="2026-01-24 07:56:41.748427831 +0000 UTC m=+940.468301109" Jan 24 07:56:42 crc kubenswrapper[4705]: I0124 07:56:42.193644 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fnqzn"] Jan 24 07:56:42 crc kubenswrapper[4705]: I0124 07:56:42.194885 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fnqzn" Jan 24 07:56:42 crc kubenswrapper[4705]: I0124 07:56:42.205276 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fnqzn"] Jan 24 07:56:42 crc kubenswrapper[4705]: I0124 07:56:42.527322 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9dlt\" (UniqueName: \"kubernetes.io/projected/508301de-d491-4dbb-9f4b-c2732d5007eb-kube-api-access-n9dlt\") pod \"openstack-operator-index-fnqzn\" (UID: \"508301de-d491-4dbb-9f4b-c2732d5007eb\") " pod="openstack-operators/openstack-operator-index-fnqzn" Jan 24 07:56:42 crc kubenswrapper[4705]: I0124 07:56:42.628572 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9dlt\" (UniqueName: \"kubernetes.io/projected/508301de-d491-4dbb-9f4b-c2732d5007eb-kube-api-access-n9dlt\") pod \"openstack-operator-index-fnqzn\" (UID: \"508301de-d491-4dbb-9f4b-c2732d5007eb\") " pod="openstack-operators/openstack-operator-index-fnqzn" Jan 24 07:56:42 crc kubenswrapper[4705]: I0124 07:56:42.647735 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9dlt\" (UniqueName: \"kubernetes.io/projected/508301de-d491-4dbb-9f4b-c2732d5007eb-kube-api-access-n9dlt\") pod \"openstack-operator-index-fnqzn\" (UID: 
\"508301de-d491-4dbb-9f4b-c2732d5007eb\") " pod="openstack-operators/openstack-operator-index-fnqzn" Jan 24 07:56:42 crc kubenswrapper[4705]: I0124 07:56:42.740099 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-xxpwv" podUID="cfa96111-a597-405b-a026-d78eaa12859c" containerName="registry-server" containerID="cri-o://1684e1fc8b579c8d2529076a050d1791d198e0c68466af9fcc9cc1f8e4a8a71d" gracePeriod=2 Jan 24 07:56:42 crc kubenswrapper[4705]: I0124 07:56:42.843793 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fnqzn" Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.155548 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xxpwv" Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.339579 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7smzb\" (UniqueName: \"kubernetes.io/projected/cfa96111-a597-405b-a026-d78eaa12859c-kube-api-access-7smzb\") pod \"cfa96111-a597-405b-a026-d78eaa12859c\" (UID: \"cfa96111-a597-405b-a026-d78eaa12859c\") " Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.343116 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fnqzn"] Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.346046 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfa96111-a597-405b-a026-d78eaa12859c-kube-api-access-7smzb" (OuterVolumeSpecName: "kube-api-access-7smzb") pod "cfa96111-a597-405b-a026-d78eaa12859c" (UID: "cfa96111-a597-405b-a026-d78eaa12859c"). InnerVolumeSpecName "kube-api-access-7smzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.441092 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7smzb\" (UniqueName: \"kubernetes.io/projected/cfa96111-a597-405b-a026-d78eaa12859c-kube-api-access-7smzb\") on node \"crc\" DevicePath \"\"" Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.711703 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-47zdb" Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.766135 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fnqzn" event={"ID":"508301de-d491-4dbb-9f4b-c2732d5007eb","Type":"ContainerStarted","Data":"68279e9d045a26c54e8c4c84d7008cc43806ad07b7f6ddfd9225dcc22d01c24d"} Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.766199 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fnqzn" event={"ID":"508301de-d491-4dbb-9f4b-c2732d5007eb","Type":"ContainerStarted","Data":"dce881a7104d56903366e978b2f9a8d27d9f581a2d6ed1d88a6384a0693a76ea"} Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.768695 4705 generic.go:334] "Generic (PLEG): container finished" podID="cfa96111-a597-405b-a026-d78eaa12859c" containerID="1684e1fc8b579c8d2529076a050d1791d198e0c68466af9fcc9cc1f8e4a8a71d" exitCode=0 Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.768754 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xxpwv" event={"ID":"cfa96111-a597-405b-a026-d78eaa12859c","Type":"ContainerDied","Data":"1684e1fc8b579c8d2529076a050d1791d198e0c68466af9fcc9cc1f8e4a8a71d"} Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.768788 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xxpwv" 
event={"ID":"cfa96111-a597-405b-a026-d78eaa12859c","Type":"ContainerDied","Data":"aec0c1960442cb9f872c0ecff4e15abb0ec8a5f96dbb7a3b464800476568a924"} Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.768832 4705 scope.go:117] "RemoveContainer" containerID="1684e1fc8b579c8d2529076a050d1791d198e0c68466af9fcc9cc1f8e4a8a71d" Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.768953 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xxpwv" Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.787906 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fnqzn" podStartSLOduration=1.717261756 podStartE2EDuration="1.787881789s" podCreationTimestamp="2026-01-24 07:56:42 +0000 UTC" firstStartedPulling="2026-01-24 07:56:43.356004064 +0000 UTC m=+942.075877352" lastFinishedPulling="2026-01-24 07:56:43.426624097 +0000 UTC m=+942.146497385" observedRunningTime="2026-01-24 07:56:43.782232517 +0000 UTC m=+942.502105805" watchObservedRunningTime="2026-01-24 07:56:43.787881789 +0000 UTC m=+942.507755077" Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.800572 4705 scope.go:117] "RemoveContainer" containerID="1684e1fc8b579c8d2529076a050d1791d198e0c68466af9fcc9cc1f8e4a8a71d" Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.800671 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xxpwv"] Jan 24 07:56:43 crc kubenswrapper[4705]: E0124 07:56:43.801913 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1684e1fc8b579c8d2529076a050d1791d198e0c68466af9fcc9cc1f8e4a8a71d\": container with ID starting with 1684e1fc8b579c8d2529076a050d1791d198e0c68466af9fcc9cc1f8e4a8a71d not found: ID does not exist" containerID="1684e1fc8b579c8d2529076a050d1791d198e0c68466af9fcc9cc1f8e4a8a71d" Jan 24 07:56:43 
crc kubenswrapper[4705]: I0124 07:56:43.801972 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1684e1fc8b579c8d2529076a050d1791d198e0c68466af9fcc9cc1f8e4a8a71d"} err="failed to get container status \"1684e1fc8b579c8d2529076a050d1791d198e0c68466af9fcc9cc1f8e4a8a71d\": rpc error: code = NotFound desc = could not find container \"1684e1fc8b579c8d2529076a050d1791d198e0c68466af9fcc9cc1f8e4a8a71d\": container with ID starting with 1684e1fc8b579c8d2529076a050d1791d198e0c68466af9fcc9cc1f8e4a8a71d not found: ID does not exist" Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.804661 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-xxpwv"] Jan 24 07:56:43 crc kubenswrapper[4705]: I0124 07:56:43.845808 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-zq9jr" Jan 24 07:56:44 crc kubenswrapper[4705]: I0124 07:56:44.325563 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-crt2p" Jan 24 07:56:45 crc kubenswrapper[4705]: I0124 07:56:45.597890 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfa96111-a597-405b-a026-d78eaa12859c" path="/var/lib/kubelet/pods/cfa96111-a597-405b-a026-d78eaa12859c/volumes" Jan 24 07:56:47 crc kubenswrapper[4705]: I0124 07:56:47.996178 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vv6rb"] Jan 24 07:56:47 crc kubenswrapper[4705]: E0124 07:56:47.996740 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa96111-a597-405b-a026-d78eaa12859c" containerName="registry-server" Jan 24 07:56:47 crc kubenswrapper[4705]: I0124 07:56:47.996753 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa96111-a597-405b-a026-d78eaa12859c" containerName="registry-server" Jan 24 07:56:47 crc kubenswrapper[4705]: I0124 07:56:47.996892 4705 
memory_manager.go:354] "RemoveStaleState removing state" podUID="cfa96111-a597-405b-a026-d78eaa12859c" containerName="registry-server" Jan 24 07:56:47 crc kubenswrapper[4705]: I0124 07:56:47.997733 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:56:48 crc kubenswrapper[4705]: I0124 07:56:48.011146 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vv6rb"] Jan 24 07:56:48 crc kubenswrapper[4705]: I0124 07:56:48.120203 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2122169-5921-4ae7-9c6e-069db3a511a2-catalog-content\") pod \"community-operators-vv6rb\" (UID: \"e2122169-5921-4ae7-9c6e-069db3a511a2\") " pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:56:48 crc kubenswrapper[4705]: I0124 07:56:48.120334 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2122169-5921-4ae7-9c6e-069db3a511a2-utilities\") pod \"community-operators-vv6rb\" (UID: \"e2122169-5921-4ae7-9c6e-069db3a511a2\") " pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:56:48 crc kubenswrapper[4705]: I0124 07:56:48.124061 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw75c\" (UniqueName: \"kubernetes.io/projected/e2122169-5921-4ae7-9c6e-069db3a511a2-kube-api-access-qw75c\") pod \"community-operators-vv6rb\" (UID: \"e2122169-5921-4ae7-9c6e-069db3a511a2\") " pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:56:48 crc kubenswrapper[4705]: I0124 07:56:48.225884 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e2122169-5921-4ae7-9c6e-069db3a511a2-catalog-content\") pod \"community-operators-vv6rb\" (UID: \"e2122169-5921-4ae7-9c6e-069db3a511a2\") " pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:56:48 crc kubenswrapper[4705]: I0124 07:56:48.225961 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2122169-5921-4ae7-9c6e-069db3a511a2-utilities\") pod \"community-operators-vv6rb\" (UID: \"e2122169-5921-4ae7-9c6e-069db3a511a2\") " pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:56:48 crc kubenswrapper[4705]: I0124 07:56:48.226000 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw75c\" (UniqueName: \"kubernetes.io/projected/e2122169-5921-4ae7-9c6e-069db3a511a2-kube-api-access-qw75c\") pod \"community-operators-vv6rb\" (UID: \"e2122169-5921-4ae7-9c6e-069db3a511a2\") " pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:56:48 crc kubenswrapper[4705]: I0124 07:56:48.226532 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2122169-5921-4ae7-9c6e-069db3a511a2-catalog-content\") pod \"community-operators-vv6rb\" (UID: \"e2122169-5921-4ae7-9c6e-069db3a511a2\") " pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:56:48 crc kubenswrapper[4705]: I0124 07:56:48.226655 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2122169-5921-4ae7-9c6e-069db3a511a2-utilities\") pod \"community-operators-vv6rb\" (UID: \"e2122169-5921-4ae7-9c6e-069db3a511a2\") " pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:56:48 crc kubenswrapper[4705]: I0124 07:56:48.246895 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw75c\" (UniqueName: 
\"kubernetes.io/projected/e2122169-5921-4ae7-9c6e-069db3a511a2-kube-api-access-qw75c\") pod \"community-operators-vv6rb\" (UID: \"e2122169-5921-4ae7-9c6e-069db3a511a2\") " pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:56:48 crc kubenswrapper[4705]: I0124 07:56:48.315744 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:56:48 crc kubenswrapper[4705]: I0124 07:56:48.836331 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vv6rb"] Jan 24 07:56:49 crc kubenswrapper[4705]: I0124 07:56:49.804908 4705 generic.go:334] "Generic (PLEG): container finished" podID="e2122169-5921-4ae7-9c6e-069db3a511a2" containerID="393ae4e7482c1cbb0ef0bcca5b899ca6fcb99d8c99ee5a9791f299197ca30518" exitCode=0 Jan 24 07:56:49 crc kubenswrapper[4705]: I0124 07:56:49.804972 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vv6rb" event={"ID":"e2122169-5921-4ae7-9c6e-069db3a511a2","Type":"ContainerDied","Data":"393ae4e7482c1cbb0ef0bcca5b899ca6fcb99d8c99ee5a9791f299197ca30518"} Jan 24 07:56:49 crc kubenswrapper[4705]: I0124 07:56:49.805175 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vv6rb" event={"ID":"e2122169-5921-4ae7-9c6e-069db3a511a2","Type":"ContainerStarted","Data":"5aa0410b92ffa7fd5369496955049c4988e0e0d4185be9beb1d76aa0b3a9764e"} Jan 24 07:56:50 crc kubenswrapper[4705]: I0124 07:56:50.813579 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vv6rb" event={"ID":"e2122169-5921-4ae7-9c6e-069db3a511a2","Type":"ContainerStarted","Data":"6b134e5f4c8722b62ae23fd132c0ef8aa29ea634820952b6a7e21b29612b4405"} Jan 24 07:56:51 crc kubenswrapper[4705]: I0124 07:56:51.822175 4705 generic.go:334] "Generic (PLEG): container finished" podID="e2122169-5921-4ae7-9c6e-069db3a511a2" 
containerID="6b134e5f4c8722b62ae23fd132c0ef8aa29ea634820952b6a7e21b29612b4405" exitCode=0 Jan 24 07:56:51 crc kubenswrapper[4705]: I0124 07:56:51.822218 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vv6rb" event={"ID":"e2122169-5921-4ae7-9c6e-069db3a511a2","Type":"ContainerDied","Data":"6b134e5f4c8722b62ae23fd132c0ef8aa29ea634820952b6a7e21b29612b4405"} Jan 24 07:56:52 crc kubenswrapper[4705]: I0124 07:56:52.844870 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-fnqzn" Jan 24 07:56:52 crc kubenswrapper[4705]: I0124 07:56:52.845228 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-fnqzn" Jan 24 07:56:52 crc kubenswrapper[4705]: I0124 07:56:52.869382 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-fnqzn" Jan 24 07:56:53 crc kubenswrapper[4705]: I0124 07:56:53.836601 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vv6rb" event={"ID":"e2122169-5921-4ae7-9c6e-069db3a511a2","Type":"ContainerStarted","Data":"2a91d27d1c96e3c43add9cb5bc1e16ac0eef757012f1e42b1e9e78f86f4b1dd9"} Jan 24 07:56:53 crc kubenswrapper[4705]: I0124 07:56:53.859891 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vv6rb" podStartSLOduration=3.46485492 podStartE2EDuration="6.859868264s" podCreationTimestamp="2026-01-24 07:56:47 +0000 UTC" firstStartedPulling="2026-01-24 07:56:49.806322007 +0000 UTC m=+948.526195295" lastFinishedPulling="2026-01-24 07:56:53.201335361 +0000 UTC m=+951.921208639" observedRunningTime="2026-01-24 07:56:53.859645148 +0000 UTC m=+952.579518446" watchObservedRunningTime="2026-01-24 07:56:53.859868264 +0000 UTC m=+952.579741552" Jan 24 07:56:53 crc kubenswrapper[4705]: I0124 
07:56:53.867544 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-fnqzn" Jan 24 07:56:58 crc kubenswrapper[4705]: I0124 07:56:58.316988 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:56:58 crc kubenswrapper[4705]: I0124 07:56:58.317303 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:56:58 crc kubenswrapper[4705]: I0124 07:56:58.373141 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:56:58 crc kubenswrapper[4705]: I0124 07:56:58.904522 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:56:59 crc kubenswrapper[4705]: I0124 07:56:59.391209 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vv6rb"] Jan 24 07:57:00 crc kubenswrapper[4705]: I0124 07:57:00.668687 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5"] Jan 24 07:57:00 crc kubenswrapper[4705]: I0124 07:57:00.669973 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" Jan 24 07:57:00 crc kubenswrapper[4705]: I0124 07:57:00.675175 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-f2ljc" Jan 24 07:57:00 crc kubenswrapper[4705]: I0124 07:57:00.691998 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5"] Jan 24 07:57:00 crc kubenswrapper[4705]: I0124 07:57:00.779181 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-bundle\") pod \"3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5\" (UID: \"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67\") " pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" Jan 24 07:57:00 crc kubenswrapper[4705]: I0124 07:57:00.779241 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-util\") pod \"3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5\" (UID: \"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67\") " pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" Jan 24 07:57:00 crc kubenswrapper[4705]: I0124 07:57:00.779288 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lpjf\" (UniqueName: \"kubernetes.io/projected/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-kube-api-access-2lpjf\") pod \"3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5\" (UID: \"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67\") " pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" Jan 24 07:57:00 crc kubenswrapper[4705]: I0124 
07:57:00.875449 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vv6rb" podUID="e2122169-5921-4ae7-9c6e-069db3a511a2" containerName="registry-server" containerID="cri-o://2a91d27d1c96e3c43add9cb5bc1e16ac0eef757012f1e42b1e9e78f86f4b1dd9" gracePeriod=2 Jan 24 07:57:00 crc kubenswrapper[4705]: I0124 07:57:00.880362 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-bundle\") pod \"3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5\" (UID: \"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67\") " pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" Jan 24 07:57:00 crc kubenswrapper[4705]: I0124 07:57:00.880418 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-util\") pod \"3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5\" (UID: \"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67\") " pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" Jan 24 07:57:00 crc kubenswrapper[4705]: I0124 07:57:00.880454 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lpjf\" (UniqueName: \"kubernetes.io/projected/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-kube-api-access-2lpjf\") pod \"3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5\" (UID: \"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67\") " pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" Jan 24 07:57:00 crc kubenswrapper[4705]: I0124 07:57:00.881178 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-bundle\") pod 
\"3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5\" (UID: \"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67\") " pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" Jan 24 07:57:00 crc kubenswrapper[4705]: I0124 07:57:00.881198 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-util\") pod \"3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5\" (UID: \"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67\") " pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" Jan 24 07:57:00 crc kubenswrapper[4705]: I0124 07:57:00.903855 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lpjf\" (UniqueName: \"kubernetes.io/projected/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-kube-api-access-2lpjf\") pod \"3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5\" (UID: \"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67\") " pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" Jan 24 07:57:00 crc kubenswrapper[4705]: I0124 07:57:00.986729 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.217719 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5"] Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.687809 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.793054 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2122169-5921-4ae7-9c6e-069db3a511a2-utilities\") pod \"e2122169-5921-4ae7-9c6e-069db3a511a2\" (UID: \"e2122169-5921-4ae7-9c6e-069db3a511a2\") " Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.793189 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2122169-5921-4ae7-9c6e-069db3a511a2-catalog-content\") pod \"e2122169-5921-4ae7-9c6e-069db3a511a2\" (UID: \"e2122169-5921-4ae7-9c6e-069db3a511a2\") " Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.793261 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw75c\" (UniqueName: \"kubernetes.io/projected/e2122169-5921-4ae7-9c6e-069db3a511a2-kube-api-access-qw75c\") pod \"e2122169-5921-4ae7-9c6e-069db3a511a2\" (UID: \"e2122169-5921-4ae7-9c6e-069db3a511a2\") " Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.794994 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2122169-5921-4ae7-9c6e-069db3a511a2-utilities" (OuterVolumeSpecName: "utilities") pod "e2122169-5921-4ae7-9c6e-069db3a511a2" (UID: "e2122169-5921-4ae7-9c6e-069db3a511a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.802182 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2122169-5921-4ae7-9c6e-069db3a511a2-kube-api-access-qw75c" (OuterVolumeSpecName: "kube-api-access-qw75c") pod "e2122169-5921-4ae7-9c6e-069db3a511a2" (UID: "e2122169-5921-4ae7-9c6e-069db3a511a2"). InnerVolumeSpecName "kube-api-access-qw75c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.862590 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2122169-5921-4ae7-9c6e-069db3a511a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2122169-5921-4ae7-9c6e-069db3a511a2" (UID: "e2122169-5921-4ae7-9c6e-069db3a511a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.882642 4705 generic.go:334] "Generic (PLEG): container finished" podID="ebc0e3ca-4a5f-4ae7-af73-92edccb58a67" containerID="7b1a77a44bc05821737c0ff451c28a696f94aaddaf27c445d2055db0d273a24c" exitCode=0 Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.882719 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" event={"ID":"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67","Type":"ContainerDied","Data":"7b1a77a44bc05821737c0ff451c28a696f94aaddaf27c445d2055db0d273a24c"} Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.882748 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" event={"ID":"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67","Type":"ContainerStarted","Data":"7dc0b9717a6400291fb2074641aa6ace2fa129e91c682d183a9f92a39c5084a5"} Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.885453 4705 generic.go:334] "Generic (PLEG): container finished" podID="e2122169-5921-4ae7-9c6e-069db3a511a2" containerID="2a91d27d1c96e3c43add9cb5bc1e16ac0eef757012f1e42b1e9e78f86f4b1dd9" exitCode=0 Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.885484 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vv6rb" 
event={"ID":"e2122169-5921-4ae7-9c6e-069db3a511a2","Type":"ContainerDied","Data":"2a91d27d1c96e3c43add9cb5bc1e16ac0eef757012f1e42b1e9e78f86f4b1dd9"} Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.885507 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vv6rb" event={"ID":"e2122169-5921-4ae7-9c6e-069db3a511a2","Type":"ContainerDied","Data":"5aa0410b92ffa7fd5369496955049c4988e0e0d4185be9beb1d76aa0b3a9764e"} Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.885527 4705 scope.go:117] "RemoveContainer" containerID="2a91d27d1c96e3c43add9cb5bc1e16ac0eef757012f1e42b1e9e78f86f4b1dd9" Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.885652 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vv6rb" Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.898778 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2122169-5921-4ae7-9c6e-069db3a511a2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.898880 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw75c\" (UniqueName: \"kubernetes.io/projected/e2122169-5921-4ae7-9c6e-069db3a511a2-kube-api-access-qw75c\") on node \"crc\" DevicePath \"\"" Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.898899 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2122169-5921-4ae7-9c6e-069db3a511a2-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.978003 4705 scope.go:117] "RemoveContainer" containerID="6b134e5f4c8722b62ae23fd132c0ef8aa29ea634820952b6a7e21b29612b4405" Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.993506 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-vv6rb"] Jan 24 07:57:01 crc kubenswrapper[4705]: I0124 07:57:01.996011 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vv6rb"] Jan 24 07:57:02 crc kubenswrapper[4705]: I0124 07:57:02.016780 4705 scope.go:117] "RemoveContainer" containerID="393ae4e7482c1cbb0ef0bcca5b899ca6fcb99d8c99ee5a9791f299197ca30518" Jan 24 07:57:02 crc kubenswrapper[4705]: I0124 07:57:02.136153 4705 scope.go:117] "RemoveContainer" containerID="2a91d27d1c96e3c43add9cb5bc1e16ac0eef757012f1e42b1e9e78f86f4b1dd9" Jan 24 07:57:02 crc kubenswrapper[4705]: E0124 07:57:02.136701 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a91d27d1c96e3c43add9cb5bc1e16ac0eef757012f1e42b1e9e78f86f4b1dd9\": container with ID starting with 2a91d27d1c96e3c43add9cb5bc1e16ac0eef757012f1e42b1e9e78f86f4b1dd9 not found: ID does not exist" containerID="2a91d27d1c96e3c43add9cb5bc1e16ac0eef757012f1e42b1e9e78f86f4b1dd9" Jan 24 07:57:02 crc kubenswrapper[4705]: I0124 07:57:02.136756 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a91d27d1c96e3c43add9cb5bc1e16ac0eef757012f1e42b1e9e78f86f4b1dd9"} err="failed to get container status \"2a91d27d1c96e3c43add9cb5bc1e16ac0eef757012f1e42b1e9e78f86f4b1dd9\": rpc error: code = NotFound desc = could not find container \"2a91d27d1c96e3c43add9cb5bc1e16ac0eef757012f1e42b1e9e78f86f4b1dd9\": container with ID starting with 2a91d27d1c96e3c43add9cb5bc1e16ac0eef757012f1e42b1e9e78f86f4b1dd9 not found: ID does not exist" Jan 24 07:57:02 crc kubenswrapper[4705]: I0124 07:57:02.136791 4705 scope.go:117] "RemoveContainer" containerID="6b134e5f4c8722b62ae23fd132c0ef8aa29ea634820952b6a7e21b29612b4405" Jan 24 07:57:02 crc kubenswrapper[4705]: E0124 07:57:02.141396 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"6b134e5f4c8722b62ae23fd132c0ef8aa29ea634820952b6a7e21b29612b4405\": container with ID starting with 6b134e5f4c8722b62ae23fd132c0ef8aa29ea634820952b6a7e21b29612b4405 not found: ID does not exist" containerID="6b134e5f4c8722b62ae23fd132c0ef8aa29ea634820952b6a7e21b29612b4405" Jan 24 07:57:02 crc kubenswrapper[4705]: I0124 07:57:02.141491 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b134e5f4c8722b62ae23fd132c0ef8aa29ea634820952b6a7e21b29612b4405"} err="failed to get container status \"6b134e5f4c8722b62ae23fd132c0ef8aa29ea634820952b6a7e21b29612b4405\": rpc error: code = NotFound desc = could not find container \"6b134e5f4c8722b62ae23fd132c0ef8aa29ea634820952b6a7e21b29612b4405\": container with ID starting with 6b134e5f4c8722b62ae23fd132c0ef8aa29ea634820952b6a7e21b29612b4405 not found: ID does not exist" Jan 24 07:57:02 crc kubenswrapper[4705]: I0124 07:57:02.141523 4705 scope.go:117] "RemoveContainer" containerID="393ae4e7482c1cbb0ef0bcca5b899ca6fcb99d8c99ee5a9791f299197ca30518" Jan 24 07:57:02 crc kubenswrapper[4705]: E0124 07:57:02.145328 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"393ae4e7482c1cbb0ef0bcca5b899ca6fcb99d8c99ee5a9791f299197ca30518\": container with ID starting with 393ae4e7482c1cbb0ef0bcca5b899ca6fcb99d8c99ee5a9791f299197ca30518 not found: ID does not exist" containerID="393ae4e7482c1cbb0ef0bcca5b899ca6fcb99d8c99ee5a9791f299197ca30518" Jan 24 07:57:02 crc kubenswrapper[4705]: I0124 07:57:02.146156 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"393ae4e7482c1cbb0ef0bcca5b899ca6fcb99d8c99ee5a9791f299197ca30518"} err="failed to get container status \"393ae4e7482c1cbb0ef0bcca5b899ca6fcb99d8c99ee5a9791f299197ca30518\": rpc error: code = NotFound desc = could not find container \"393ae4e7482c1cbb0ef0bcca5b899ca6fcb99d8c99ee5a9791f299197ca30518\": 
container with ID starting with 393ae4e7482c1cbb0ef0bcca5b899ca6fcb99d8c99ee5a9791f299197ca30518 not found: ID does not exist" Jan 24 07:57:02 crc kubenswrapper[4705]: I0124 07:57:02.895038 4705 generic.go:334] "Generic (PLEG): container finished" podID="ebc0e3ca-4a5f-4ae7-af73-92edccb58a67" containerID="fee33a9828450d3d7382b7f1b54ceeb613547fc9118a3f28e1da30eab867ee21" exitCode=0 Jan 24 07:57:02 crc kubenswrapper[4705]: I0124 07:57:02.895121 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" event={"ID":"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67","Type":"ContainerDied","Data":"fee33a9828450d3d7382b7f1b54ceeb613547fc9118a3f28e1da30eab867ee21"} Jan 24 07:57:03 crc kubenswrapper[4705]: I0124 07:57:03.583571 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2122169-5921-4ae7-9c6e-069db3a511a2" path="/var/lib/kubelet/pods/e2122169-5921-4ae7-9c6e-069db3a511a2/volumes" Jan 24 07:57:03 crc kubenswrapper[4705]: I0124 07:57:03.904966 4705 generic.go:334] "Generic (PLEG): container finished" podID="ebc0e3ca-4a5f-4ae7-af73-92edccb58a67" containerID="b7df0f6df4dbe4feb87ba7e716ae33c50eeddf757dcb95e4a3eecfb38482769d" exitCode=0 Jan 24 07:57:03 crc kubenswrapper[4705]: I0124 07:57:03.905065 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" event={"ID":"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67","Type":"ContainerDied","Data":"b7df0f6df4dbe4feb87ba7e716ae33c50eeddf757dcb95e4a3eecfb38482769d"} Jan 24 07:57:05 crc kubenswrapper[4705]: I0124 07:57:05.140608 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" Jan 24 07:57:05 crc kubenswrapper[4705]: I0124 07:57:05.261018 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-bundle\") pod \"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67\" (UID: \"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67\") " Jan 24 07:57:05 crc kubenswrapper[4705]: I0124 07:57:05.261343 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-util\") pod \"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67\" (UID: \"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67\") " Jan 24 07:57:05 crc kubenswrapper[4705]: I0124 07:57:05.261537 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lpjf\" (UniqueName: \"kubernetes.io/projected/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-kube-api-access-2lpjf\") pod \"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67\" (UID: \"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67\") " Jan 24 07:57:05 crc kubenswrapper[4705]: I0124 07:57:05.261888 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-bundle" (OuterVolumeSpecName: "bundle") pod "ebc0e3ca-4a5f-4ae7-af73-92edccb58a67" (UID: "ebc0e3ca-4a5f-4ae7-af73-92edccb58a67"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:57:05 crc kubenswrapper[4705]: I0124 07:57:05.268752 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-kube-api-access-2lpjf" (OuterVolumeSpecName: "kube-api-access-2lpjf") pod "ebc0e3ca-4a5f-4ae7-af73-92edccb58a67" (UID: "ebc0e3ca-4a5f-4ae7-af73-92edccb58a67"). InnerVolumeSpecName "kube-api-access-2lpjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:57:05 crc kubenswrapper[4705]: I0124 07:57:05.274711 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-util" (OuterVolumeSpecName: "util") pod "ebc0e3ca-4a5f-4ae7-af73-92edccb58a67" (UID: "ebc0e3ca-4a5f-4ae7-af73-92edccb58a67"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:57:05 crc kubenswrapper[4705]: I0124 07:57:05.362944 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lpjf\" (UniqueName: \"kubernetes.io/projected/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-kube-api-access-2lpjf\") on node \"crc\" DevicePath \"\"" Jan 24 07:57:05 crc kubenswrapper[4705]: I0124 07:57:05.363480 4705 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:57:05 crc kubenswrapper[4705]: I0124 07:57:05.363539 4705 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ebc0e3ca-4a5f-4ae7-af73-92edccb58a67-util\") on node \"crc\" DevicePath \"\"" Jan 24 07:57:05 crc kubenswrapper[4705]: I0124 07:57:05.922339 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" event={"ID":"ebc0e3ca-4a5f-4ae7-af73-92edccb58a67","Type":"ContainerDied","Data":"7dc0b9717a6400291fb2074641aa6ace2fa129e91c682d183a9f92a39c5084a5"} Jan 24 07:57:05 crc kubenswrapper[4705]: I0124 07:57:05.922386 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dc0b9717a6400291fb2074641aa6ace2fa129e91c682d183a9f92a39c5084a5" Jan 24 07:57:05 crc kubenswrapper[4705]: I0124 07:57:05.922392 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5" Jan 24 07:57:07 crc kubenswrapper[4705]: I0124 07:57:07.071379 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:57:07 crc kubenswrapper[4705]: I0124 07:57:07.071436 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:57:09 crc kubenswrapper[4705]: I0124 07:57:09.428518 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5f778d85fb-56s2d"] Jan 24 07:57:09 crc kubenswrapper[4705]: E0124 07:57:09.429185 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2122169-5921-4ae7-9c6e-069db3a511a2" containerName="extract-content" Jan 24 07:57:09 crc kubenswrapper[4705]: I0124 07:57:09.429203 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2122169-5921-4ae7-9c6e-069db3a511a2" containerName="extract-content" Jan 24 07:57:09 crc kubenswrapper[4705]: E0124 07:57:09.429216 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2122169-5921-4ae7-9c6e-069db3a511a2" containerName="registry-server" Jan 24 07:57:09 crc kubenswrapper[4705]: I0124 07:57:09.429223 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2122169-5921-4ae7-9c6e-069db3a511a2" containerName="registry-server" Jan 24 07:57:09 crc kubenswrapper[4705]: E0124 07:57:09.429238 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e2122169-5921-4ae7-9c6e-069db3a511a2" containerName="extract-utilities" Jan 24 07:57:09 crc kubenswrapper[4705]: I0124 07:57:09.429246 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2122169-5921-4ae7-9c6e-069db3a511a2" containerName="extract-utilities" Jan 24 07:57:09 crc kubenswrapper[4705]: E0124 07:57:09.429261 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc0e3ca-4a5f-4ae7-af73-92edccb58a67" containerName="pull" Jan 24 07:57:09 crc kubenswrapper[4705]: I0124 07:57:09.429268 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc0e3ca-4a5f-4ae7-af73-92edccb58a67" containerName="pull" Jan 24 07:57:09 crc kubenswrapper[4705]: E0124 07:57:09.429282 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc0e3ca-4a5f-4ae7-af73-92edccb58a67" containerName="extract" Jan 24 07:57:09 crc kubenswrapper[4705]: I0124 07:57:09.429289 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc0e3ca-4a5f-4ae7-af73-92edccb58a67" containerName="extract" Jan 24 07:57:09 crc kubenswrapper[4705]: E0124 07:57:09.429300 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc0e3ca-4a5f-4ae7-af73-92edccb58a67" containerName="util" Jan 24 07:57:09 crc kubenswrapper[4705]: I0124 07:57:09.429307 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc0e3ca-4a5f-4ae7-af73-92edccb58a67" containerName="util" Jan 24 07:57:09 crc kubenswrapper[4705]: I0124 07:57:09.429436 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2122169-5921-4ae7-9c6e-069db3a511a2" containerName="registry-server" Jan 24 07:57:09 crc kubenswrapper[4705]: I0124 07:57:09.429451 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebc0e3ca-4a5f-4ae7-af73-92edccb58a67" containerName="extract" Jan 24 07:57:09 crc kubenswrapper[4705]: I0124 07:57:09.429994 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5f778d85fb-56s2d" Jan 24 07:57:09 crc kubenswrapper[4705]: I0124 07:57:09.432672 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-sgwdc" Jan 24 07:57:09 crc kubenswrapper[4705]: I0124 07:57:09.504907 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5f778d85fb-56s2d"] Jan 24 07:57:09 crc kubenswrapper[4705]: I0124 07:57:09.619867 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxmv2\" (UniqueName: \"kubernetes.io/projected/1c03ba2e-ee1e-4afc-8f97-84439ceec36d-kube-api-access-cxmv2\") pod \"openstack-operator-controller-init-5f778d85fb-56s2d\" (UID: \"1c03ba2e-ee1e-4afc-8f97-84439ceec36d\") " pod="openstack-operators/openstack-operator-controller-init-5f778d85fb-56s2d" Jan 24 07:57:09 crc kubenswrapper[4705]: I0124 07:57:09.721407 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxmv2\" (UniqueName: \"kubernetes.io/projected/1c03ba2e-ee1e-4afc-8f97-84439ceec36d-kube-api-access-cxmv2\") pod \"openstack-operator-controller-init-5f778d85fb-56s2d\" (UID: \"1c03ba2e-ee1e-4afc-8f97-84439ceec36d\") " pod="openstack-operators/openstack-operator-controller-init-5f778d85fb-56s2d" Jan 24 07:57:09 crc kubenswrapper[4705]: I0124 07:57:09.740193 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxmv2\" (UniqueName: \"kubernetes.io/projected/1c03ba2e-ee1e-4afc-8f97-84439ceec36d-kube-api-access-cxmv2\") pod \"openstack-operator-controller-init-5f778d85fb-56s2d\" (UID: \"1c03ba2e-ee1e-4afc-8f97-84439ceec36d\") " pod="openstack-operators/openstack-operator-controller-init-5f778d85fb-56s2d" Jan 24 07:57:09 crc kubenswrapper[4705]: I0124 07:57:09.746836 4705 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5f778d85fb-56s2d" Jan 24 07:57:10 crc kubenswrapper[4705]: I0124 07:57:10.245094 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5f778d85fb-56s2d"] Jan 24 07:57:10 crc kubenswrapper[4705]: I0124 07:57:10.955584 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5f778d85fb-56s2d" event={"ID":"1c03ba2e-ee1e-4afc-8f97-84439ceec36d","Type":"ContainerStarted","Data":"8b9a3dbb44f8356cb288a98bee19b60ed83f888494ae81fd8e40f174668ea20a"} Jan 24 07:57:13 crc kubenswrapper[4705]: I0124 07:57:13.981645 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5f778d85fb-56s2d" event={"ID":"1c03ba2e-ee1e-4afc-8f97-84439ceec36d","Type":"ContainerStarted","Data":"871d9476cef45d28e877bfa5ddafa982cf3ac2e8b56ad9a82dbc239dd34f3792"} Jan 24 07:57:13 crc kubenswrapper[4705]: I0124 07:57:13.981999 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5f778d85fb-56s2d" Jan 24 07:57:14 crc kubenswrapper[4705]: I0124 07:57:14.009615 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5f778d85fb-56s2d" podStartSLOduration=1.783778163 podStartE2EDuration="5.009593813s" podCreationTimestamp="2026-01-24 07:57:09 +0000 UTC" firstStartedPulling="2026-01-24 07:57:10.252499345 +0000 UTC m=+968.972372633" lastFinishedPulling="2026-01-24 07:57:13.478314995 +0000 UTC m=+972.198188283" observedRunningTime="2026-01-24 07:57:14.005926248 +0000 UTC m=+972.725799536" watchObservedRunningTime="2026-01-24 07:57:14.009593813 +0000 UTC m=+972.729467101" Jan 24 07:57:19 crc kubenswrapper[4705]: I0124 07:57:19.749788 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-5f778d85fb-56s2d" Jan 24 07:57:28 crc kubenswrapper[4705]: I0124 07:57:28.953138 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nj75j"] Jan 24 07:57:28 crc kubenswrapper[4705]: I0124 07:57:28.956927 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nj75j" Jan 24 07:57:28 crc kubenswrapper[4705]: I0124 07:57:28.961966 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nj75j"] Jan 24 07:57:28 crc kubenswrapper[4705]: I0124 07:57:28.981781 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh9fv\" (UniqueName: \"kubernetes.io/projected/30708e5f-0ff8-489b-b77e-226517ce5645-kube-api-access-fh9fv\") pod \"redhat-marketplace-nj75j\" (UID: \"30708e5f-0ff8-489b-b77e-226517ce5645\") " pod="openshift-marketplace/redhat-marketplace-nj75j" Jan 24 07:57:28 crc kubenswrapper[4705]: I0124 07:57:28.982201 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30708e5f-0ff8-489b-b77e-226517ce5645-utilities\") pod \"redhat-marketplace-nj75j\" (UID: \"30708e5f-0ff8-489b-b77e-226517ce5645\") " pod="openshift-marketplace/redhat-marketplace-nj75j" Jan 24 07:57:28 crc kubenswrapper[4705]: I0124 07:57:28.982336 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30708e5f-0ff8-489b-b77e-226517ce5645-catalog-content\") pod \"redhat-marketplace-nj75j\" (UID: \"30708e5f-0ff8-489b-b77e-226517ce5645\") " pod="openshift-marketplace/redhat-marketplace-nj75j" Jan 24 07:57:29 crc kubenswrapper[4705]: I0124 07:57:29.083585 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30708e5f-0ff8-489b-b77e-226517ce5645-utilities\") pod \"redhat-marketplace-nj75j\" (UID: \"30708e5f-0ff8-489b-b77e-226517ce5645\") " pod="openshift-marketplace/redhat-marketplace-nj75j" Jan 24 07:57:29 crc kubenswrapper[4705]: I0124 07:57:29.083655 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30708e5f-0ff8-489b-b77e-226517ce5645-catalog-content\") pod \"redhat-marketplace-nj75j\" (UID: \"30708e5f-0ff8-489b-b77e-226517ce5645\") " pod="openshift-marketplace/redhat-marketplace-nj75j" Jan 24 07:57:29 crc kubenswrapper[4705]: I0124 07:57:29.083687 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh9fv\" (UniqueName: \"kubernetes.io/projected/30708e5f-0ff8-489b-b77e-226517ce5645-kube-api-access-fh9fv\") pod \"redhat-marketplace-nj75j\" (UID: \"30708e5f-0ff8-489b-b77e-226517ce5645\") " pod="openshift-marketplace/redhat-marketplace-nj75j" Jan 24 07:57:29 crc kubenswrapper[4705]: I0124 07:57:29.084208 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30708e5f-0ff8-489b-b77e-226517ce5645-utilities\") pod \"redhat-marketplace-nj75j\" (UID: \"30708e5f-0ff8-489b-b77e-226517ce5645\") " pod="openshift-marketplace/redhat-marketplace-nj75j" Jan 24 07:57:29 crc kubenswrapper[4705]: I0124 07:57:29.084261 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30708e5f-0ff8-489b-b77e-226517ce5645-catalog-content\") pod \"redhat-marketplace-nj75j\" (UID: \"30708e5f-0ff8-489b-b77e-226517ce5645\") " pod="openshift-marketplace/redhat-marketplace-nj75j" Jan 24 07:57:29 crc kubenswrapper[4705]: I0124 07:57:29.109074 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh9fv\" (UniqueName: 
\"kubernetes.io/projected/30708e5f-0ff8-489b-b77e-226517ce5645-kube-api-access-fh9fv\") pod \"redhat-marketplace-nj75j\" (UID: \"30708e5f-0ff8-489b-b77e-226517ce5645\") " pod="openshift-marketplace/redhat-marketplace-nj75j" Jan 24 07:57:29 crc kubenswrapper[4705]: I0124 07:57:29.278021 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nj75j" Jan 24 07:57:29 crc kubenswrapper[4705]: I0124 07:57:29.828896 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nj75j"] Jan 24 07:57:30 crc kubenswrapper[4705]: I0124 07:57:30.069135 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj75j" event={"ID":"30708e5f-0ff8-489b-b77e-226517ce5645","Type":"ContainerStarted","Data":"6a74280053814f799637d0395e3ac29420200d09a21c57d8cd2e9dc7281b6608"} Jan 24 07:57:30 crc kubenswrapper[4705]: I0124 07:57:30.069178 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj75j" event={"ID":"30708e5f-0ff8-489b-b77e-226517ce5645","Type":"ContainerStarted","Data":"64139f95c9d54d4ef9d106993f4a2c8f6fc1cb2bc638bed317cc8dd78c4be199"} Jan 24 07:57:31 crc kubenswrapper[4705]: I0124 07:57:31.078280 4705 generic.go:334] "Generic (PLEG): container finished" podID="30708e5f-0ff8-489b-b77e-226517ce5645" containerID="6a74280053814f799637d0395e3ac29420200d09a21c57d8cd2e9dc7281b6608" exitCode=0 Jan 24 07:57:31 crc kubenswrapper[4705]: I0124 07:57:31.078583 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj75j" event={"ID":"30708e5f-0ff8-489b-b77e-226517ce5645","Type":"ContainerDied","Data":"6a74280053814f799637d0395e3ac29420200d09a21c57d8cd2e9dc7281b6608"} Jan 24 07:57:33 crc kubenswrapper[4705]: I0124 07:57:33.091716 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj75j" 
event={"ID":"30708e5f-0ff8-489b-b77e-226517ce5645","Type":"ContainerStarted","Data":"5bbc7c21e6b4c8a96218a65ece8f349dab2118b953907766aa10788d438e27cb"} Jan 24 07:57:34 crc kubenswrapper[4705]: I0124 07:57:34.101286 4705 generic.go:334] "Generic (PLEG): container finished" podID="30708e5f-0ff8-489b-b77e-226517ce5645" containerID="5bbc7c21e6b4c8a96218a65ece8f349dab2118b953907766aa10788d438e27cb" exitCode=0 Jan 24 07:57:34 crc kubenswrapper[4705]: I0124 07:57:34.101377 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj75j" event={"ID":"30708e5f-0ff8-489b-b77e-226517ce5645","Type":"ContainerDied","Data":"5bbc7c21e6b4c8a96218a65ece8f349dab2118b953907766aa10788d438e27cb"} Jan 24 07:57:35 crc kubenswrapper[4705]: I0124 07:57:35.109873 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj75j" event={"ID":"30708e5f-0ff8-489b-b77e-226517ce5645","Type":"ContainerStarted","Data":"d674f6dc3648e78b0a6ead9b80b125964ce74379bc30431c1aecd3843751dda7"} Jan 24 07:57:35 crc kubenswrapper[4705]: I0124 07:57:35.135540 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nj75j" podStartSLOduration=3.70464152 podStartE2EDuration="7.135522986s" podCreationTimestamp="2026-01-24 07:57:28 +0000 UTC" firstStartedPulling="2026-01-24 07:57:31.080658972 +0000 UTC m=+989.800532260" lastFinishedPulling="2026-01-24 07:57:34.511540438 +0000 UTC m=+993.231413726" observedRunningTime="2026-01-24 07:57:35.129856173 +0000 UTC m=+993.849729461" watchObservedRunningTime="2026-01-24 07:57:35.135522986 +0000 UTC m=+993.855396274" Jan 24 07:57:37 crc kubenswrapper[4705]: I0124 07:57:37.071147 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 24 07:57:37 crc kubenswrapper[4705]: I0124 07:57:37.071487 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:57:37 crc kubenswrapper[4705]: I0124 07:57:37.071538 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 07:57:37 crc kubenswrapper[4705]: I0124 07:57:37.072198 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4157e341864c82a6048118572db8f63cf29b32c144fe523434dcc318955e439e"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:57:37 crc kubenswrapper[4705]: I0124 07:57:37.072260 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://4157e341864c82a6048118572db8f63cf29b32c144fe523434dcc318955e439e" gracePeriod=600 Jan 24 07:57:37 crc kubenswrapper[4705]: I0124 07:57:37.306052 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerID="4157e341864c82a6048118572db8f63cf29b32c144fe523434dcc318955e439e" exitCode=0 Jan 24 07:57:37 crc kubenswrapper[4705]: I0124 07:57:37.306097 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"4157e341864c82a6048118572db8f63cf29b32c144fe523434dcc318955e439e"} Jan 24 07:57:37 crc kubenswrapper[4705]: I0124 07:57:37.306128 4705 scope.go:117] "RemoveContainer" containerID="49dd615a3faac21183e353fd2b4521164856375addc3eb5e62c6cd13a36cfc96" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.659730 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-nf4zc"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.660922 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-nf4zc" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.662321 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-2f8zq" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.666810 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-r5j5v"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.668130 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-r5j5v" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.671074 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8974r" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.673758 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-dbvkx"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.674517 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-dbvkx" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.679250 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5b49g" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.681721 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtszx\" (UniqueName: \"kubernetes.io/projected/0a119afa-9520-46bc-8fde-0b2974035e48-kube-api-access-rtszx\") pod \"barbican-operator-controller-manager-7f86f8796f-nf4zc\" (UID: \"0a119afa-9520-46bc-8fde-0b2974035e48\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-nf4zc" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.681776 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzwt9\" (UniqueName: \"kubernetes.io/projected/91182c35-90b8-409a-ac96-191c754f5c9d-kube-api-access-xzwt9\") pod \"designate-operator-controller-manager-b45d7bf98-dbvkx\" (UID: \"91182c35-90b8-409a-ac96-191c754f5c9d\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-dbvkx" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.681903 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csc2r\" (UniqueName: \"kubernetes.io/projected/652fc521-e0f0-4d0c-8ca3-8077222ab892-kube-api-access-csc2r\") pod \"cinder-operator-controller-manager-69cf5d4557-r5j5v\" (UID: \"652fc521-e0f0-4d0c-8ca3-8077222ab892\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-r5j5v" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.688859 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-nf4zc"] Jan 24 07:57:38 crc kubenswrapper[4705]: 
I0124 07:57:38.692976 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-dbvkx"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.697676 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-xsz7p"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.698507 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-xsz7p" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.703265 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-s7xdj" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.706871 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-sjc8r"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.707778 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-sjc8r" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.710989 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-chfrb" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.718428 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-r5j5v"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.731031 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-sjc8r"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.744391 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sj4dw"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.745433 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sj4dw" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.764213 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zzdlr" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.780227 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-xsz7p"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.783780 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sj4dw"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.784370 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csc2r\" (UniqueName: \"kubernetes.io/projected/652fc521-e0f0-4d0c-8ca3-8077222ab892-kube-api-access-csc2r\") pod \"cinder-operator-controller-manager-69cf5d4557-r5j5v\" (UID: \"652fc521-e0f0-4d0c-8ca3-8077222ab892\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-r5j5v" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.784439 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtszx\" (UniqueName: \"kubernetes.io/projected/0a119afa-9520-46bc-8fde-0b2974035e48-kube-api-access-rtszx\") pod \"barbican-operator-controller-manager-7f86f8796f-nf4zc\" (UID: \"0a119afa-9520-46bc-8fde-0b2974035e48\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-nf4zc" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.784497 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzwt9\" (UniqueName: \"kubernetes.io/projected/91182c35-90b8-409a-ac96-191c754f5c9d-kube-api-access-xzwt9\") pod \"designate-operator-controller-manager-b45d7bf98-dbvkx\" (UID: 
\"91182c35-90b8-409a-ac96-191c754f5c9d\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-dbvkx" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.823858 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.825218 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.832234 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csc2r\" (UniqueName: \"kubernetes.io/projected/652fc521-e0f0-4d0c-8ca3-8077222ab892-kube-api-access-csc2r\") pod \"cinder-operator-controller-manager-69cf5d4557-r5j5v\" (UID: \"652fc521-e0f0-4d0c-8ca3-8077222ab892\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-r5j5v" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.832614 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-cfgtv" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.834981 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.837516 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-v629x"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.838269 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v629x" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.877630 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzwt9\" (UniqueName: \"kubernetes.io/projected/91182c35-90b8-409a-ac96-191c754f5c9d-kube-api-access-xzwt9\") pod \"designate-operator-controller-manager-b45d7bf98-dbvkx\" (UID: \"91182c35-90b8-409a-ac96-191c754f5c9d\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-dbvkx" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.878273 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-lf45z" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.879393 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtszx\" (UniqueName: \"kubernetes.io/projected/0a119afa-9520-46bc-8fde-0b2974035e48-kube-api-access-rtszx\") pod \"barbican-operator-controller-manager-7f86f8796f-nf4zc\" (UID: \"0a119afa-9520-46bc-8fde-0b2974035e48\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-nf4zc" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.885375 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d2hv\" (UniqueName: \"kubernetes.io/projected/be549f5c-a477-4e7d-a928-0e9885ffa225-kube-api-access-2d2hv\") pod \"horizon-operator-controller-manager-77d5c5b54f-sj4dw\" (UID: \"be549f5c-a477-4e7d-a928-0e9885ffa225\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sj4dw" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.885455 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bb64\" (UniqueName: \"kubernetes.io/projected/338f4812-65cb-4a3e-a83e-73a72e4f31eb-kube-api-access-5bb64\") pod 
\"heat-operator-controller-manager-594c8c9d5d-sjc8r\" (UID: \"338f4812-65cb-4a3e-a83e-73a72e4f31eb\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-sjc8r" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.885525 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nccmk\" (UniqueName: \"kubernetes.io/projected/93151962-475c-412e-98d3-7363d8fd5f6c-kube-api-access-nccmk\") pod \"glance-operator-controller-manager-78fdd796fd-xsz7p\" (UID: \"93151962-475c-412e-98d3-7363d8fd5f6c\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-xsz7p" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.888653 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.897104 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-v629x"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.905585 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-tgzdf"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.906403 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tgzdf" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.908446 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-8h8mj" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.927601 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-s86rp"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.928664 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-s86rp" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.931341 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rfgr4" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.946382 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-tgzdf"] Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.986147 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-nf4zc" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.986902 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckh7d\" (UniqueName: \"kubernetes.io/projected/bef91cd6-2f77-474f-8258-e23ca5b37091-kube-api-access-ckh7d\") pod \"infra-operator-controller-manager-694cf4f878-l4fkg\" (UID: \"bef91cd6-2f77-474f-8258-e23ca5b37091\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.986957 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert\") pod \"infra-operator-controller-manager-694cf4f878-l4fkg\" (UID: \"bef91cd6-2f77-474f-8258-e23ca5b37091\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.987371 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d2hv\" (UniqueName: \"kubernetes.io/projected/be549f5c-a477-4e7d-a928-0e9885ffa225-kube-api-access-2d2hv\") pod \"horizon-operator-controller-manager-77d5c5b54f-sj4dw\" (UID: 
\"be549f5c-a477-4e7d-a928-0e9885ffa225\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sj4dw" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.987480 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkh26\" (UniqueName: \"kubernetes.io/projected/241de282-17c7-48c1-b4cb-fbeb9b98bd08-kube-api-access-vkh26\") pod \"ironic-operator-controller-manager-598f7747c9-v629x\" (UID: \"241de282-17c7-48c1-b4cb-fbeb9b98bd08\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v629x" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.987583 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bb64\" (UniqueName: \"kubernetes.io/projected/338f4812-65cb-4a3e-a83e-73a72e4f31eb-kube-api-access-5bb64\") pod \"heat-operator-controller-manager-594c8c9d5d-sjc8r\" (UID: \"338f4812-65cb-4a3e-a83e-73a72e4f31eb\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-sjc8r" Jan 24 07:57:38 crc kubenswrapper[4705]: I0124 07:57:38.988043 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nccmk\" (UniqueName: \"kubernetes.io/projected/93151962-475c-412e-98d3-7363d8fd5f6c-kube-api-access-nccmk\") pod \"glance-operator-controller-manager-78fdd796fd-xsz7p\" (UID: \"93151962-475c-412e-98d3-7363d8fd5f6c\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-xsz7p" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.002891 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x5h78"] Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.003804 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x5h78"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.011216 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-s86rp"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.021440 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-dbvkx"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.022058 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-r5j5v"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.022678 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-5fpv5"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.037149 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d2hv\" (UniqueName: \"kubernetes.io/projected/be549f5c-a477-4e7d-a928-0e9885ffa225-kube-api-access-2d2hv\") pod \"horizon-operator-controller-manager-77d5c5b54f-sj4dw\" (UID: \"be549f5c-a477-4e7d-a928-0e9885ffa225\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sj4dw"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.039757 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bb64\" (UniqueName: \"kubernetes.io/projected/338f4812-65cb-4a3e-a83e-73a72e4f31eb-kube-api-access-5bb64\") pod \"heat-operator-controller-manager-594c8c9d5d-sjc8r\" (UID: \"338f4812-65cb-4a3e-a83e-73a72e4f31eb\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-sjc8r"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.052889 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nccmk\" (UniqueName: \"kubernetes.io/projected/93151962-475c-412e-98d3-7363d8fd5f6c-kube-api-access-nccmk\") pod \"glance-operator-controller-manager-78fdd796fd-xsz7p\" (UID: \"93151962-475c-412e-98d3-7363d8fd5f6c\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-xsz7p"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.068092 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-xsz7p"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.069353 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-sjc8r"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.092640 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q9ct\" (UniqueName: \"kubernetes.io/projected/fd042d2d-7b0f-4ffd-b9fa-8d78a0be9e9e-kube-api-access-4q9ct\") pod \"manila-operator-controller-manager-78c6999f6f-s86rp\" (UID: \"fd042d2d-7b0f-4ffd-b9fa-8d78a0be9e9e\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-s86rp"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.092741 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckh7d\" (UniqueName: \"kubernetes.io/projected/bef91cd6-2f77-474f-8258-e23ca5b37091-kube-api-access-ckh7d\") pod \"infra-operator-controller-manager-694cf4f878-l4fkg\" (UID: \"bef91cd6-2f77-474f-8258-e23ca5b37091\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.092780 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert\") pod \"infra-operator-controller-manager-694cf4f878-l4fkg\" (UID: 
\"bef91cd6-2f77-474f-8258-e23ca5b37091\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.092836 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkh26\" (UniqueName: \"kubernetes.io/projected/241de282-17c7-48c1-b4cb-fbeb9b98bd08-kube-api-access-vkh26\") pod \"ironic-operator-controller-manager-598f7747c9-v629x\" (UID: \"241de282-17c7-48c1-b4cb-fbeb9b98bd08\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v629x"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.092863 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbc2x\" (UniqueName: \"kubernetes.io/projected/23f7495d-06eb-45e5-b5e6-e50169760b0b-kube-api-access-gbc2x\") pod \"keystone-operator-controller-manager-b8b6d4659-tgzdf\" (UID: \"23f7495d-06eb-45e5-b5e6-e50169760b0b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tgzdf"
Jan 24 07:57:39 crc kubenswrapper[4705]: E0124 07:57:39.093304 4705 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 24 07:57:39 crc kubenswrapper[4705]: E0124 07:57:39.093361 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert podName:bef91cd6-2f77-474f-8258-e23ca5b37091 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:39.593341726 +0000 UTC m=+998.313215014 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert") pod "infra-operator-controller-manager-694cf4f878-l4fkg" (UID: "bef91cd6-2f77-474f-8258-e23ca5b37091") : secret "infra-operator-webhook-server-cert" not found
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.095165 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sj4dw"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.106406 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-mpgjf"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.107813 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mpgjf"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.110173 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dnwgb"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.124202 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x5h78"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.155148 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-mpgjf"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.158089 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkh26\" (UniqueName: \"kubernetes.io/projected/241de282-17c7-48c1-b4cb-fbeb9b98bd08-kube-api-access-vkh26\") pod \"ironic-operator-controller-manager-598f7747c9-v629x\" (UID: \"241de282-17c7-48c1-b4cb-fbeb9b98bd08\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v629x"
Jan 24 07:57:39 crc 
kubenswrapper[4705]: I0124 07:57:39.189713 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckh7d\" (UniqueName: \"kubernetes.io/projected/bef91cd6-2f77-474f-8258-e23ca5b37091-kube-api-access-ckh7d\") pod \"infra-operator-controller-manager-694cf4f878-l4fkg\" (UID: \"bef91cd6-2f77-474f-8258-e23ca5b37091\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.194083 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q9ct\" (UniqueName: \"kubernetes.io/projected/fd042d2d-7b0f-4ffd-b9fa-8d78a0be9e9e-kube-api-access-4q9ct\") pod \"manila-operator-controller-manager-78c6999f6f-s86rp\" (UID: \"fd042d2d-7b0f-4ffd-b9fa-8d78a0be9e9e\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-s86rp"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.194455 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4qhd\" (UniqueName: \"kubernetes.io/projected/49e04e6c-68bd-4ebb-a6b8-8c728b7e3ead-kube-api-access-k4qhd\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-x5h78\" (UID: \"49e04e6c-68bd-4ebb-a6b8-8c728b7e3ead\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x5h78"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.207509 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-drzkh"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.194591 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbc2x\" (UniqueName: \"kubernetes.io/projected/23f7495d-06eb-45e5-b5e6-e50169760b0b-kube-api-access-gbc2x\") pod \"keystone-operator-controller-manager-b8b6d4659-tgzdf\" (UID: \"23f7495d-06eb-45e5-b5e6-e50169760b0b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tgzdf"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.238876 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-drzkh"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.239241 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v629x"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.252455 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-952vn"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.257268 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-869gl"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.265397 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-869gl"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.271259 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-svw4v"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.274151 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-drzkh"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.278625 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbc2x\" (UniqueName: \"kubernetes.io/projected/23f7495d-06eb-45e5-b5e6-e50169760b0b-kube-api-access-gbc2x\") pod \"keystone-operator-controller-manager-b8b6d4659-tgzdf\" (UID: \"23f7495d-06eb-45e5-b5e6-e50169760b0b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tgzdf"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.279134 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nj75j"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.279956 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nj75j"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.281469 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q9ct\" (UniqueName: \"kubernetes.io/projected/fd042d2d-7b0f-4ffd-b9fa-8d78a0be9e9e-kube-api-access-4q9ct\") pod \"manila-operator-controller-manager-78c6999f6f-s86rp\" (UID: \"fd042d2d-7b0f-4ffd-b9fa-8d78a0be9e9e\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-s86rp"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.289445 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.291141 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-s86rp"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.295113 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.300665 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-869gl"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.303277 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.303524 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-82gr2"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.326425 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-6445j"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.327455 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6445j"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.336659 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-mskhj"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.340596 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p78q2\" (UniqueName: \"kubernetes.io/projected/3c52d864-16a1-4eb6-80e9-ac7e5009bbd9-kube-api-access-p78q2\") pod \"octavia-operator-controller-manager-7bd9774b6-869gl\" (UID: \"3c52d864-16a1-4eb6-80e9-ac7e5009bbd9\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-869gl"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.340692 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4qhd\" (UniqueName: \"kubernetes.io/projected/49e04e6c-68bd-4ebb-a6b8-8c728b7e3ead-kube-api-access-k4qhd\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-x5h78\" (UID: \"49e04e6c-68bd-4ebb-a6b8-8c728b7e3ead\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x5h78"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.340722 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rr6k\" (UniqueName: \"kubernetes.io/projected/5a7f4747-1fd9-4aa3-b954-e32101ebe927-kube-api-access-7rr6k\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz\" (UID: \"5a7f4747-1fd9-4aa3-b954-e32101ebe927\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.340748 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk4pl\" (UniqueName: \"kubernetes.io/projected/b14e84b5-9dcb-4280-9480-a6f34bf8c8dd-kube-api-access-dk4pl\") pod \"nova-operator-controller-manager-6b8bc8d87d-drzkh\" (UID: \"b14e84b5-9dcb-4280-9480-a6f34bf8c8dd\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-drzkh"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.340769 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz\" (UID: \"5a7f4747-1fd9-4aa3-b954-e32101ebe927\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.340810 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcxqh\" (UniqueName: \"kubernetes.io/projected/bf85561a-7710-4a15-b4b1-c8f48e50dc53-kube-api-access-vcxqh\") pod \"neutron-operator-controller-manager-78d58447c5-mpgjf\" (UID: \"bf85561a-7710-4a15-b4b1-c8f48e50dc53\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mpgjf"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.341835 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-k8q6j"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.342755 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-k8q6j"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.359811 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-rc8n9"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.369055 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-k8q6j"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.378180 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.378362 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4qhd\" (UniqueName: \"kubernetes.io/projected/49e04e6c-68bd-4ebb-a6b8-8c728b7e3ead-kube-api-access-k4qhd\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-x5h78\" (UID: \"49e04e6c-68bd-4ebb-a6b8-8c728b7e3ead\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x5h78"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.392104 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"853514deca6f38cd0a77ff6aa66eff5f7cb660b73f8271ebb43497a216af6f05"}
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.405936 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-f2vcw"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.407764 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-f2vcw"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.413220 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-92wn5"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.424462 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-6445j"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.426225 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nj75j"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.435085 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7c64596589-v9zxl"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.436112 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7c64596589-v9zxl"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.443197 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9224\" (UniqueName: \"kubernetes.io/projected/404be92b-a12e-42d7-868f-adf825bc7c68-kube-api-access-p9224\") pod \"swift-operator-controller-manager-547cbdb99f-f2vcw\" (UID: \"404be92b-a12e-42d7-868f-adf825bc7c68\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-f2vcw"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.443251 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rr6k\" (UniqueName: \"kubernetes.io/projected/5a7f4747-1fd9-4aa3-b954-e32101ebe927-kube-api-access-7rr6k\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz\" (UID: \"5a7f4747-1fd9-4aa3-b954-e32101ebe927\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.443288 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk4pl\" (UniqueName: \"kubernetes.io/projected/b14e84b5-9dcb-4280-9480-a6f34bf8c8dd-kube-api-access-dk4pl\") pod \"nova-operator-controller-manager-6b8bc8d87d-drzkh\" (UID: \"b14e84b5-9dcb-4280-9480-a6f34bf8c8dd\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-drzkh"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.443324 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz\" (UID: \"5a7f4747-1fd9-4aa3-b954-e32101ebe927\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.443350 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcxqh\" (UniqueName: \"kubernetes.io/projected/bf85561a-7710-4a15-b4b1-c8f48e50dc53-kube-api-access-vcxqh\") pod \"neutron-operator-controller-manager-78d58447c5-mpgjf\" (UID: \"bf85561a-7710-4a15-b4b1-c8f48e50dc53\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mpgjf"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.443382 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jhpx\" (UniqueName: \"kubernetes.io/projected/eb05abb5-cee5-4e0d-9217-6154aebe5836-kube-api-access-9jhpx\") pod \"placement-operator-controller-manager-5d646b7d76-k8q6j\" (UID: \"eb05abb5-cee5-4e0d-9217-6154aebe5836\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-k8q6j"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.443446 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf5qb\" (UniqueName: \"kubernetes.io/projected/2e973d30-3868-4922-b576-12587d46810a-kube-api-access-pf5qb\") pod \"ovn-operator-controller-manager-55db956ddc-6445j\" (UID: \"2e973d30-3868-4922-b576-12587d46810a\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6445j"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.443456 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hk2xr"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.443470 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p78q2\" (UniqueName: \"kubernetes.io/projected/3c52d864-16a1-4eb6-80e9-ac7e5009bbd9-kube-api-access-p78q2\") pod \"octavia-operator-controller-manager-7bd9774b6-869gl\" (UID: \"3c52d864-16a1-4eb6-80e9-ac7e5009bbd9\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-869gl"
Jan 24 07:57:39 crc kubenswrapper[4705]: E0124 07:57:39.444995 4705 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 24 07:57:39 crc kubenswrapper[4705]: E0124 07:57:39.445149 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert podName:5a7f4747-1fd9-4aa3-b954-e32101ebe927 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:39.945093905 +0000 UTC m=+998.664967193 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" (UID: "5a7f4747-1fd9-4aa3-b954-e32101ebe927") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.454973 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-c6x57"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.456411 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-c6x57"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.470198 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-c8k8d"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.482568 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcxqh\" (UniqueName: \"kubernetes.io/projected/bf85561a-7710-4a15-b4b1-c8f48e50dc53-kube-api-access-vcxqh\") pod \"neutron-operator-controller-manager-78d58447c5-mpgjf\" (UID: \"bf85561a-7710-4a15-b4b1-c8f48e50dc53\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mpgjf"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.482664 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7c64596589-v9zxl"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.483881 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk4pl\" (UniqueName: \"kubernetes.io/projected/b14e84b5-9dcb-4280-9480-a6f34bf8c8dd-kube-api-access-dk4pl\") pod \"nova-operator-controller-manager-6b8bc8d87d-drzkh\" (UID: \"b14e84b5-9dcb-4280-9480-a6f34bf8c8dd\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-drzkh"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.493909 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rr6k\" (UniqueName: \"kubernetes.io/projected/5a7f4747-1fd9-4aa3-b954-e32101ebe927-kube-api-access-7rr6k\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz\" (UID: \"5a7f4747-1fd9-4aa3-b954-e32101ebe927\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.500540 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p78q2\" (UniqueName: \"kubernetes.io/projected/3c52d864-16a1-4eb6-80e9-ac7e5009bbd9-kube-api-access-p78q2\") pod \"octavia-operator-controller-manager-7bd9774b6-869gl\" (UID: \"3c52d864-16a1-4eb6-80e9-ac7e5009bbd9\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-869gl"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.500902 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-f2vcw"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.516708 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-c6x57"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.527094 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-5nngs"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.528269 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5nngs"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.535289 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-z45s2"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.539726 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tgzdf"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.540076 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x5h78"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.540114 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mpgjf"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.541907 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-5nngs"]
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.547639 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9224\" (UniqueName: \"kubernetes.io/projected/404be92b-a12e-42d7-868f-adf825bc7c68-kube-api-access-p9224\") pod \"swift-operator-controller-manager-547cbdb99f-f2vcw\" (UID: \"404be92b-a12e-42d7-868f-adf825bc7c68\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-f2vcw"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.547693 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glgqz\" (UniqueName: \"kubernetes.io/projected/f5382856-3a6e-4d10-beb2-9df688e2f6c7-kube-api-access-glgqz\") pod \"telemetry-operator-controller-manager-7c64596589-v9zxl\" (UID: \"f5382856-3a6e-4d10-beb2-9df688e2f6c7\") " pod="openstack-operators/telemetry-operator-controller-manager-7c64596589-v9zxl"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.547763 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jhpx\" (UniqueName: \"kubernetes.io/projected/eb05abb5-cee5-4e0d-9217-6154aebe5836-kube-api-access-9jhpx\") pod \"placement-operator-controller-manager-5d646b7d76-k8q6j\" (UID: \"eb05abb5-cee5-4e0d-9217-6154aebe5836\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-k8q6j"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.547796 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z76d5\" (UniqueName: \"kubernetes.io/projected/a08c7b5c-356a-4a05-a600-82f6bf5aad91-kube-api-access-z76d5\") pod \"test-operator-controller-manager-69797bbcbd-c6x57\" (UID: \"a08c7b5c-356a-4a05-a600-82f6bf5aad91\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-c6x57"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.547851 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf5qb\" (UniqueName: \"kubernetes.io/projected/2e973d30-3868-4922-b576-12587d46810a-kube-api-access-pf5qb\") pod \"ovn-operator-controller-manager-55db956ddc-6445j\" (UID: \"2e973d30-3868-4922-b576-12587d46810a\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6445j"
Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.589393 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf5qb\" (UniqueName: \"kubernetes.io/projected/2e973d30-3868-4922-b576-12587d46810a-kube-api-access-pf5qb\") pod \"ovn-operator-controller-manager-55db956ddc-6445j\" (UID: \"2e973d30-3868-4922-b576-12587d46810a\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6445j"
Jan 24 07:57:39 crc 
kubenswrapper[4705]: I0124 07:57:39.589930 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-drzkh" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.595736 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9224\" (UniqueName: \"kubernetes.io/projected/404be92b-a12e-42d7-868f-adf825bc7c68-kube-api-access-p9224\") pod \"swift-operator-controller-manager-547cbdb99f-f2vcw\" (UID: \"404be92b-a12e-42d7-868f-adf825bc7c68\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-f2vcw" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.612033 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jhpx\" (UniqueName: \"kubernetes.io/projected/eb05abb5-cee5-4e0d-9217-6154aebe5836-kube-api-access-9jhpx\") pod \"placement-operator-controller-manager-5d646b7d76-k8q6j\" (UID: \"eb05abb5-cee5-4e0d-9217-6154aebe5836\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-k8q6j" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.618948 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-869gl" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.625847 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-f2vcw" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.632599 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg"] Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.633461 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.635301 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.635646 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.660689 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert\") pod \"infra-operator-controller-manager-694cf4f878-l4fkg\" (UID: \"bef91cd6-2f77-474f-8258-e23ca5b37091\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.661083 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glgqz\" (UniqueName: \"kubernetes.io/projected/f5382856-3a6e-4d10-beb2-9df688e2f6c7-kube-api-access-glgqz\") pod \"telemetry-operator-controller-manager-7c64596589-v9zxl\" (UID: \"f5382856-3a6e-4d10-beb2-9df688e2f6c7\") " pod="openstack-operators/telemetry-operator-controller-manager-7c64596589-v9zxl" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.661165 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggtq9\" (UniqueName: \"kubernetes.io/projected/0ab0b790-0cf4-453a-9c4b-97a6ddf2bd36-kube-api-access-ggtq9\") pod \"watcher-operator-controller-manager-6d9458688d-5nngs\" (UID: \"0ab0b790-0cf4-453a-9c4b-97a6ddf2bd36\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5nngs" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.661208 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z76d5\" 
(UniqueName: \"kubernetes.io/projected/a08c7b5c-356a-4a05-a600-82f6bf5aad91-kube-api-access-z76d5\") pod \"test-operator-controller-manager-69797bbcbd-c6x57\" (UID: \"a08c7b5c-356a-4a05-a600-82f6bf5aad91\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-c6x57" Jan 24 07:57:39 crc kubenswrapper[4705]: E0124 07:57:39.661966 4705 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 07:57:39 crc kubenswrapper[4705]: E0124 07:57:39.662193 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert podName:bef91cd6-2f77-474f-8258-e23ca5b37091 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:40.662012891 +0000 UTC m=+999.381886179 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert") pod "infra-operator-controller-manager-694cf4f878-l4fkg" (UID: "bef91cd6-2f77-474f-8258-e23ca5b37091") : secret "infra-operator-webhook-server-cert" not found Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.662290 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-bpdlx" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.694627 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glgqz\" (UniqueName: \"kubernetes.io/projected/f5382856-3a6e-4d10-beb2-9df688e2f6c7-kube-api-access-glgqz\") pod \"telemetry-operator-controller-manager-7c64596589-v9zxl\" (UID: \"f5382856-3a6e-4d10-beb2-9df688e2f6c7\") " pod="openstack-operators/telemetry-operator-controller-manager-7c64596589-v9zxl" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.698701 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg"] Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.701952 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z76d5\" (UniqueName: \"kubernetes.io/projected/a08c7b5c-356a-4a05-a600-82f6bf5aad91-kube-api-access-z76d5\") pod \"test-operator-controller-manager-69797bbcbd-c6x57\" (UID: \"a08c7b5c-356a-4a05-a600-82f6bf5aad91\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-c6x57" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.775740 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6445j" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.776758 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-k8q6j" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.786397 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m7jb\" (UniqueName: \"kubernetes.io/projected/51813a09-552b-4f12-904a-840cf6829c80-kube-api-access-8m7jb\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.786442 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggtq9\" (UniqueName: \"kubernetes.io/projected/0ab0b790-0cf4-453a-9c4b-97a6ddf2bd36-kube-api-access-ggtq9\") pod \"watcher-operator-controller-manager-6d9458688d-5nngs\" (UID: \"0ab0b790-0cf4-453a-9c4b-97a6ddf2bd36\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5nngs" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.786487 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.786529 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.793071 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d4kz9"] Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.794095 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d4kz9" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.796678 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-f89vs" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.801531 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d4kz9"] Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.818609 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggtq9\" (UniqueName: \"kubernetes.io/projected/0ab0b790-0cf4-453a-9c4b-97a6ddf2bd36-kube-api-access-ggtq9\") pod \"watcher-operator-controller-manager-6d9458688d-5nngs\" (UID: \"0ab0b790-0cf4-453a-9c4b-97a6ddf2bd36\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5nngs" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.887542 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.887620 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.887765 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8m7jb\" (UniqueName: \"kubernetes.io/projected/51813a09-552b-4f12-904a-840cf6829c80-kube-api-access-8m7jb\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:39 crc kubenswrapper[4705]: E0124 07:57:39.890364 4705 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:57:39 crc kubenswrapper[4705]: E0124 07:57:39.890433 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs podName:51813a09-552b-4f12-904a-840cf6829c80 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:40.390413139 +0000 UTC m=+999.110286427 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs") pod "openstack-operator-controller-manager-8d6967975-rkwgg" (UID: "51813a09-552b-4f12-904a-840cf6829c80") : secret "webhook-server-cert" not found Jan 24 07:57:39 crc kubenswrapper[4705]: E0124 07:57:39.890659 4705 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:57:39 crc kubenswrapper[4705]: E0124 07:57:39.890702 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs podName:51813a09-552b-4f12-904a-840cf6829c80 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:40.390689896 +0000 UTC m=+999.110563184 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs") pod "openstack-operator-controller-manager-8d6967975-rkwgg" (UID: "51813a09-552b-4f12-904a-840cf6829c80") : secret "metrics-server-cert" not found Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.908634 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m7jb\" (UniqueName: \"kubernetes.io/projected/51813a09-552b-4f12-904a-840cf6829c80-kube-api-access-8m7jb\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.956339 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7c64596589-v9zxl" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.989522 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz\" (UID: \"5a7f4747-1fd9-4aa3-b954-e32101ebe927\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" Jan 24 07:57:39 crc kubenswrapper[4705]: I0124 07:57:39.989574 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8fmg\" (UniqueName: \"kubernetes.io/projected/3fac85c2-ff36-44a9-ae92-947df3332178-kube-api-access-n8fmg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-d4kz9\" (UID: \"3fac85c2-ff36-44a9-ae92-947df3332178\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d4kz9" Jan 24 07:57:39 crc kubenswrapper[4705]: E0124 07:57:39.990408 4705 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:57:39 crc kubenswrapper[4705]: E0124 07:57:39.990489 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert podName:5a7f4747-1fd9-4aa3-b954-e32101ebe927 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:40.99047075 +0000 UTC m=+999.710344038 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" (UID: "5a7f4747-1fd9-4aa3-b954-e32101ebe927") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.004423 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-c6x57" Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.025978 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5nngs" Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.030809 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-nf4zc"] Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.091911 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8fmg\" (UniqueName: \"kubernetes.io/projected/3fac85c2-ff36-44a9-ae92-947df3332178-kube-api-access-n8fmg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-d4kz9\" (UID: \"3fac85c2-ff36-44a9-ae92-947df3332178\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d4kz9" Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.122483 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8fmg\" (UniqueName: \"kubernetes.io/projected/3fac85c2-ff36-44a9-ae92-947df3332178-kube-api-access-n8fmg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-d4kz9\" (UID: \"3fac85c2-ff36-44a9-ae92-947df3332178\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d4kz9" Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.141682 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d4kz9" Jan 24 07:57:40 crc kubenswrapper[4705]: W0124 07:57:40.171283 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a119afa_9520_46bc_8fde_0b2974035e48.slice/crio-4a6debacd8bca1cccd02baf1514c36064375c4e89770a52c87e2d8c38192c346 WatchSource:0}: Error finding container 4a6debacd8bca1cccd02baf1514c36064375c4e89770a52c87e2d8c38192c346: Status 404 returned error can't find the container with id 4a6debacd8bca1cccd02baf1514c36064375c4e89770a52c87e2d8c38192c346 Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.262433 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-xsz7p"] Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.270025 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-dbvkx"] Jan 24 07:57:40 crc kubenswrapper[4705]: W0124 07:57:40.294856 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93151962_475c_412e_98d3_7363d8fd5f6c.slice/crio-c1c614db30d5b1725e422ab9fa48f9af287b89a535b1d98eba029f75c3cb9720 WatchSource:0}: Error finding container c1c614db30d5b1725e422ab9fa48f9af287b89a535b1d98eba029f75c3cb9720: Status 404 returned error can't find the container with id c1c614db30d5b1725e422ab9fa48f9af287b89a535b1d98eba029f75c3cb9720 Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.394984 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " 
pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.395097 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 07:57:40.395254 4705 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 07:57:40.395340 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs podName:51813a09-552b-4f12-904a-840cf6829c80 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:41.395320018 +0000 UTC m=+1000.115193306 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs") pod "openstack-operator-controller-manager-8d6967975-rkwgg" (UID: "51813a09-552b-4f12-904a-840cf6829c80") : secret "metrics-server-cert" not found Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 07:57:40.395430 4705 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 07:57:40.395496 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs podName:51813a09-552b-4f12-904a-840cf6829c80 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:41.395476872 +0000 UTC m=+1000.115350271 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs") pod "openstack-operator-controller-manager-8d6967975-rkwgg" (UID: "51813a09-552b-4f12-904a-840cf6829c80") : secret "webhook-server-cert" not found Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.409008 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-r5j5v"] Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.409430 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-nf4zc" event={"ID":"0a119afa-9520-46bc-8fde-0b2974035e48","Type":"ContainerStarted","Data":"4a6debacd8bca1cccd02baf1514c36064375c4e89770a52c87e2d8c38192c346"} Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.411345 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-dbvkx" event={"ID":"91182c35-90b8-409a-ac96-191c754f5c9d","Type":"ContainerStarted","Data":"33d8bf85508e04350978f805faa595d23cfc80e9b734b212e5dced26fdd2304c"} Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.412850 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-xsz7p" event={"ID":"93151962-475c-412e-98d3-7363d8fd5f6c","Type":"ContainerStarted","Data":"c1c614db30d5b1725e422ab9fa48f9af287b89a535b1d98eba029f75c3cb9720"} Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.432356 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sj4dw"] Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.468532 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nj75j" Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.525244 4705 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nj75j"] Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.561193 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-v629x"] Jan 24 07:57:40 crc kubenswrapper[4705]: W0124 07:57:40.572688 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod241de282_17c7_48c1_b4cb_fbeb9b98bd08.slice/crio-0a376422b639350ecadeaba79ad8067369d1fb11beccfa3a0a5dd568334904b5 WatchSource:0}: Error finding container 0a376422b639350ecadeaba79ad8067369d1fb11beccfa3a0a5dd568334904b5: Status 404 returned error can't find the container with id 0a376422b639350ecadeaba79ad8067369d1fb11beccfa3a0a5dd568334904b5 Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.600323 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-s86rp"] Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.643773 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-tgzdf"] Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.649914 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x5h78"] Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.660182 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-drzkh"] Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.666112 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-mpgjf"] Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.675895 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-sjc8r"] Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.681639 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-869gl"] Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 07:57:40.687128 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p78q2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-869gl_openstack-operators(3c52d864-16a1-4eb6-80e9-ac7e5009bbd9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 07:57:40.688220 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-869gl" podUID="3c52d864-16a1-4eb6-80e9-ac7e5009bbd9" Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.699535 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert\") pod \"infra-operator-controller-manager-694cf4f878-l4fkg\" (UID: \"bef91cd6-2f77-474f-8258-e23ca5b37091\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg" Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 07:57:40.699700 4705 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 
07:57:40.699862 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert podName:bef91cd6-2f77-474f-8258-e23ca5b37091 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:42.699744594 +0000 UTC m=+1001.419617882 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert") pod "infra-operator-controller-manager-694cf4f878-l4fkg" (UID: "bef91cd6-2f77-474f-8258-e23ca5b37091") : secret "infra-operator-webhook-server-cert" not found Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.819903 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-6445j"] Jan 24 07:57:40 crc kubenswrapper[4705]: W0124 07:57:40.827506 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e973d30_3868_4922_b576_12587d46810a.slice/crio-cdcfd725f036616eb107ae7a6b14e37771f5fa95a2607225319fbad683e3cf2d WatchSource:0}: Error finding container cdcfd725f036616eb107ae7a6b14e37771f5fa95a2607225319fbad683e3cf2d: Status 404 returned error can't find the container with id cdcfd725f036616eb107ae7a6b14e37771f5fa95a2607225319fbad683e3cf2d Jan 24 07:57:40 crc kubenswrapper[4705]: W0124 07:57:40.852572 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ab0b790_0cf4_453a_9c4b_97a6ddf2bd36.slice/crio-14d137b726fb82c630f76feaecf6d8f074dfa732efb8d982f08b44a8c6dc29c6 WatchSource:0}: Error finding container 14d137b726fb82c630f76feaecf6d8f074dfa732efb8d982f08b44a8c6dc29c6: Status 404 returned error can't find the container with id 14d137b726fb82c630f76feaecf6d8f074dfa732efb8d982f08b44a8c6dc29c6 Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.852922 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-7c64596589-v9zxl"] Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.864060 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-f2vcw"] Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 07:57:40.869033 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ggtq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6d9458688d-5nngs_openstack-operators(0ab0b790-0cf4-453a-9c4b-97a6ddf2bd36): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.869413 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-5nngs"] Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 07:57:40.870712 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5nngs" podUID="0ab0b790-0cf4-453a-9c4b-97a6ddf2bd36" Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.879262 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-c6x57"] Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 07:57:40.880037 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p9224,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-f2vcw_openstack-operators(404be92b-a12e-42d7-868f-adf825bc7c68): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 07:57:40.880363 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z76d5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-c6x57_openstack-operators(a08c7b5c-356a-4a05-a600-82f6bf5aad91): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 07:57:40.882643 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-c6x57" podUID="a08c7b5c-356a-4a05-a600-82f6bf5aad91" Jan 24 07:57:40 crc 
kubenswrapper[4705]: E0124 07:57:40.882700 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-f2vcw" podUID="404be92b-a12e-42d7-868f-adf825bc7c68" Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 07:57:40.886715 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.103:5001/openstack-k8s-operators/telemetry-operator:0d12e8bd594d0d63ea8f79be126812ecc5577d02,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-glgqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7c64596589-v9zxl_openstack-operators(f5382856-3a6e-4d10-beb2-9df688e2f6c7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:57:40 crc kubenswrapper[4705]: I0124 07:57:40.887283 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-k8q6j"] Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 07:57:40.888663 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7c64596589-v9zxl" podUID="f5382856-3a6e-4d10-beb2-9df688e2f6c7" Jan 24 07:57:40 crc kubenswrapper[4705]: W0124 07:57:40.892330 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb05abb5_cee5_4e0d_9217_6154aebe5836.slice/crio-db87525c216209e6c12d6dec8cb949293d2c381a3e7e3e673dca6c3f5f7d952e WatchSource:0}: Error finding container db87525c216209e6c12d6dec8cb949293d2c381a3e7e3e673dca6c3f5f7d952e: Status 404 returned error can't find the container with id 
db87525c216209e6c12d6dec8cb949293d2c381a3e7e3e673dca6c3f5f7d952e Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 07:57:40.896283 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jhpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-k8q6j_openstack-operators(eb05abb5-cee5-4e0d-9217-6154aebe5836): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:57:40 crc kubenswrapper[4705]: E0124 07:57:40.897472 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-k8q6j" podUID="eb05abb5-cee5-4e0d-9217-6154aebe5836" Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.007043 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz\" (UID: \"5a7f4747-1fd9-4aa3-b954-e32101ebe927\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" Jan 24 07:57:41 crc kubenswrapper[4705]: E0124 07:57:41.007248 4705 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not 
found Jan 24 07:57:41 crc kubenswrapper[4705]: E0124 07:57:41.007294 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert podName:5a7f4747-1fd9-4aa3-b954-e32101ebe927 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:43.00728129 +0000 UTC m=+1001.727154578 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" (UID: "5a7f4747-1fd9-4aa3-b954-e32101ebe927") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.017477 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d4kz9"] Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.413875 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.413949 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:41 crc kubenswrapper[4705]: E0124 07:57:41.414187 4705 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:57:41 crc kubenswrapper[4705]: E0124 
07:57:41.414250 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs podName:51813a09-552b-4f12-904a-840cf6829c80 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:43.414232958 +0000 UTC m=+1002.134106246 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs") pod "openstack-operator-controller-manager-8d6967975-rkwgg" (UID: "51813a09-552b-4f12-904a-840cf6829c80") : secret "webhook-server-cert" not found Jan 24 07:57:41 crc kubenswrapper[4705]: E0124 07:57:41.414553 4705 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:57:41 crc kubenswrapper[4705]: E0124 07:57:41.415358 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs podName:51813a09-552b-4f12-904a-840cf6829c80 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:43.41533126 +0000 UTC m=+1002.135204548 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs") pod "openstack-operator-controller-manager-8d6967975-rkwgg" (UID: "51813a09-552b-4f12-904a-840cf6829c80") : secret "metrics-server-cert" not found Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.434893 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-s86rp" event={"ID":"fd042d2d-7b0f-4ffd-b9fa-8d78a0be9e9e","Type":"ContainerStarted","Data":"e8e8ed9b33783d8ee6353ac37d25f17aa99dbb80ff81c99f207a03b453a3ac72"} Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.437903 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-869gl" event={"ID":"3c52d864-16a1-4eb6-80e9-ac7e5009bbd9","Type":"ContainerStarted","Data":"981e77c1c433eb9b6792daefb98c28f0659cedc13a74899569a310f9c9379a74"} Jan 24 07:57:41 crc kubenswrapper[4705]: E0124 07:57:41.440413 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-869gl" podUID="3c52d864-16a1-4eb6-80e9-ac7e5009bbd9" Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.443694 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-c6x57" event={"ID":"a08c7b5c-356a-4a05-a600-82f6bf5aad91","Type":"ContainerStarted","Data":"ce1bd3d8410779f05096dc081aedcd0c0e6d29ea3acd292de30e47a45c59d96b"} Jan 24 07:57:41 crc kubenswrapper[4705]: E0124 07:57:41.445770 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-c6x57" podUID="a08c7b5c-356a-4a05-a600-82f6bf5aad91" Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.446663 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d4kz9" event={"ID":"3fac85c2-ff36-44a9-ae92-947df3332178","Type":"ContainerStarted","Data":"c4b5f4ad0342d7f9ba8848082395254b5c593483b1489a8c536b4dcc2e58ea3f"} Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.448594 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sj4dw" event={"ID":"be549f5c-a477-4e7d-a928-0e9885ffa225","Type":"ContainerStarted","Data":"fdd3c55baba1dc9b35b68f3de0ad1fa60207e4a9ff25e438fda8b2640802f954"} Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.453040 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6445j" event={"ID":"2e973d30-3868-4922-b576-12587d46810a","Type":"ContainerStarted","Data":"cdcfd725f036616eb107ae7a6b14e37771f5fa95a2607225319fbad683e3cf2d"} Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.459204 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mpgjf" event={"ID":"bf85561a-7710-4a15-b4b1-c8f48e50dc53","Type":"ContainerStarted","Data":"60d8c05b49e25013850930271ce5aadbe5e7fae66216733934a76495acc35234"} Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.461641 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7c64596589-v9zxl" 
event={"ID":"f5382856-3a6e-4d10-beb2-9df688e2f6c7","Type":"ContainerStarted","Data":"ef036f85e558e479451c99063bf69e6ab6553daeee2c3a8c82dce87c5c6d1c88"} Jan 24 07:57:41 crc kubenswrapper[4705]: E0124 07:57:41.462905 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/openstack-k8s-operators/telemetry-operator:0d12e8bd594d0d63ea8f79be126812ecc5577d02\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7c64596589-v9zxl" podUID="f5382856-3a6e-4d10-beb2-9df688e2f6c7" Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.463621 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-k8q6j" event={"ID":"eb05abb5-cee5-4e0d-9217-6154aebe5836","Type":"ContainerStarted","Data":"db87525c216209e6c12d6dec8cb949293d2c381a3e7e3e673dca6c3f5f7d952e"} Jan 24 07:57:41 crc kubenswrapper[4705]: E0124 07:57:41.464666 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-k8q6j" podUID="eb05abb5-cee5-4e0d-9217-6154aebe5836" Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.473605 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-r5j5v" event={"ID":"652fc521-e0f0-4d0c-8ca3-8077222ab892","Type":"ContainerStarted","Data":"bc974798fee7c9f71aae0e34df09661fdfd0d89b1464dcb0dff37b924ec86b4d"} Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.481753 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tgzdf" 
event={"ID":"23f7495d-06eb-45e5-b5e6-e50169760b0b","Type":"ContainerStarted","Data":"3f2f24643c47e254193bb7e7c3b06cb1adaeed9dcff53af9c9b7e51a3552965a"} Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.493691 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v629x" event={"ID":"241de282-17c7-48c1-b4cb-fbeb9b98bd08","Type":"ContainerStarted","Data":"0a376422b639350ecadeaba79ad8067369d1fb11beccfa3a0a5dd568334904b5"} Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.503845 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-f2vcw" event={"ID":"404be92b-a12e-42d7-868f-adf825bc7c68","Type":"ContainerStarted","Data":"11b1639ec7cd2416d4fde72eb8c2b685ec2c04e794c68400c80e59370351eb0c"} Jan 24 07:57:41 crc kubenswrapper[4705]: E0124 07:57:41.510385 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-f2vcw" podUID="404be92b-a12e-42d7-868f-adf825bc7c68" Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.510958 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-drzkh" event={"ID":"b14e84b5-9dcb-4280-9480-a6f34bf8c8dd","Type":"ContainerStarted","Data":"2ce89e26f61a64a6952b19049fb24adfed656206a386d03a8c65a8ddd887a4c4"} Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.515810 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5nngs" 
event={"ID":"0ab0b790-0cf4-453a-9c4b-97a6ddf2bd36","Type":"ContainerStarted","Data":"14d137b726fb82c630f76feaecf6d8f074dfa732efb8d982f08b44a8c6dc29c6"} Jan 24 07:57:41 crc kubenswrapper[4705]: E0124 07:57:41.517789 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5nngs" podUID="0ab0b790-0cf4-453a-9c4b-97a6ddf2bd36" Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.521349 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x5h78" event={"ID":"49e04e6c-68bd-4ebb-a6b8-8c728b7e3ead","Type":"ContainerStarted","Data":"6636fcb994770c91bfeec75f1808acf3cdd3028556e1430d327df764a810684e"} Jan 24 07:57:41 crc kubenswrapper[4705]: I0124 07:57:41.544347 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-sjc8r" event={"ID":"338f4812-65cb-4a3e-a83e-73a72e4f31eb","Type":"ContainerStarted","Data":"56ae99f1acbb32abb005a6be89c8b0b5cbfacceb34d2b954d53203c76ede3ede"} Jan 24 07:57:42 crc kubenswrapper[4705]: I0124 07:57:42.572268 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nj75j" podUID="30708e5f-0ff8-489b-b77e-226517ce5645" containerName="registry-server" containerID="cri-o://d674f6dc3648e78b0a6ead9b80b125964ce74379bc30431c1aecd3843751dda7" gracePeriod=2 Jan 24 07:57:42 crc kubenswrapper[4705]: E0124 07:57:42.580164 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.103:5001/openstack-k8s-operators/telemetry-operator:0d12e8bd594d0d63ea8f79be126812ecc5577d02\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7c64596589-v9zxl" podUID="f5382856-3a6e-4d10-beb2-9df688e2f6c7" Jan 24 07:57:42 crc kubenswrapper[4705]: E0124 07:57:42.580596 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-c6x57" podUID="a08c7b5c-356a-4a05-a600-82f6bf5aad91" Jan 24 07:57:42 crc kubenswrapper[4705]: E0124 07:57:42.580650 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-k8q6j" podUID="eb05abb5-cee5-4e0d-9217-6154aebe5836" Jan 24 07:57:42 crc kubenswrapper[4705]: E0124 07:57:42.580705 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-869gl" podUID="3c52d864-16a1-4eb6-80e9-ac7e5009bbd9" Jan 24 07:57:42 crc kubenswrapper[4705]: E0124 07:57:42.580780 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-f2vcw" podUID="404be92b-a12e-42d7-868f-adf825bc7c68" Jan 24 07:57:42 crc kubenswrapper[4705]: E0124 07:57:42.592938 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5nngs" podUID="0ab0b790-0cf4-453a-9c4b-97a6ddf2bd36" Jan 24 07:57:42 crc kubenswrapper[4705]: I0124 07:57:42.742318 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert\") pod \"infra-operator-controller-manager-694cf4f878-l4fkg\" (UID: \"bef91cd6-2f77-474f-8258-e23ca5b37091\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg" Jan 24 07:57:42 crc kubenswrapper[4705]: E0124 07:57:42.742556 4705 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 07:57:42 crc kubenswrapper[4705]: E0124 07:57:42.742617 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert podName:bef91cd6-2f77-474f-8258-e23ca5b37091 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:46.74259704 +0000 UTC m=+1005.462470328 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert") pod "infra-operator-controller-manager-694cf4f878-l4fkg" (UID: "bef91cd6-2f77-474f-8258-e23ca5b37091") : secret "infra-operator-webhook-server-cert" not found Jan 24 07:57:43 crc kubenswrapper[4705]: I0124 07:57:43.045984 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz\" (UID: \"5a7f4747-1fd9-4aa3-b954-e32101ebe927\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" Jan 24 07:57:43 crc kubenswrapper[4705]: E0124 07:57:43.046232 4705 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:57:43 crc kubenswrapper[4705]: E0124 07:57:43.046457 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert podName:5a7f4747-1fd9-4aa3-b954-e32101ebe927 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:47.046432789 +0000 UTC m=+1005.766306087 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" (UID: "5a7f4747-1fd9-4aa3-b954-e32101ebe927") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:57:43 crc kubenswrapper[4705]: I0124 07:57:43.459259 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:43 crc kubenswrapper[4705]: I0124 07:57:43.459322 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:43 crc kubenswrapper[4705]: E0124 07:57:43.459488 4705 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:57:43 crc kubenswrapper[4705]: E0124 07:57:43.459533 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs podName:51813a09-552b-4f12-904a-840cf6829c80 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:47.459519015 +0000 UTC m=+1006.179392303 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs") pod "openstack-operator-controller-manager-8d6967975-rkwgg" (UID: "51813a09-552b-4f12-904a-840cf6829c80") : secret "webhook-server-cert" not found Jan 24 07:57:43 crc kubenswrapper[4705]: E0124 07:57:43.459859 4705 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:57:43 crc kubenswrapper[4705]: E0124 07:57:43.459886 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs podName:51813a09-552b-4f12-904a-840cf6829c80 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:47.459878415 +0000 UTC m=+1006.179751703 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs") pod "openstack-operator-controller-manager-8d6967975-rkwgg" (UID: "51813a09-552b-4f12-904a-840cf6829c80") : secret "metrics-server-cert" not found Jan 24 07:57:43 crc kubenswrapper[4705]: I0124 07:57:43.607312 4705 generic.go:334] "Generic (PLEG): container finished" podID="30708e5f-0ff8-489b-b77e-226517ce5645" containerID="d674f6dc3648e78b0a6ead9b80b125964ce74379bc30431c1aecd3843751dda7" exitCode=0 Jan 24 07:57:43 crc kubenswrapper[4705]: I0124 07:57:43.608103 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj75j" event={"ID":"30708e5f-0ff8-489b-b77e-226517ce5645","Type":"ContainerDied","Data":"d674f6dc3648e78b0a6ead9b80b125964ce74379bc30431c1aecd3843751dda7"} Jan 24 07:57:45 crc kubenswrapper[4705]: I0124 07:57:45.874326 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nj75j" Jan 24 07:57:45 crc kubenswrapper[4705]: I0124 07:57:45.929028 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh9fv\" (UniqueName: \"kubernetes.io/projected/30708e5f-0ff8-489b-b77e-226517ce5645-kube-api-access-fh9fv\") pod \"30708e5f-0ff8-489b-b77e-226517ce5645\" (UID: \"30708e5f-0ff8-489b-b77e-226517ce5645\") " Jan 24 07:57:45 crc kubenswrapper[4705]: I0124 07:57:45.929127 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30708e5f-0ff8-489b-b77e-226517ce5645-catalog-content\") pod \"30708e5f-0ff8-489b-b77e-226517ce5645\" (UID: \"30708e5f-0ff8-489b-b77e-226517ce5645\") " Jan 24 07:57:45 crc kubenswrapper[4705]: I0124 07:57:45.929174 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30708e5f-0ff8-489b-b77e-226517ce5645-utilities\") pod \"30708e5f-0ff8-489b-b77e-226517ce5645\" (UID: \"30708e5f-0ff8-489b-b77e-226517ce5645\") " Jan 24 07:57:45 crc kubenswrapper[4705]: I0124 07:57:45.930301 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30708e5f-0ff8-489b-b77e-226517ce5645-utilities" (OuterVolumeSpecName: "utilities") pod "30708e5f-0ff8-489b-b77e-226517ce5645" (UID: "30708e5f-0ff8-489b-b77e-226517ce5645"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:57:45 crc kubenswrapper[4705]: I0124 07:57:45.950713 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30708e5f-0ff8-489b-b77e-226517ce5645-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30708e5f-0ff8-489b-b77e-226517ce5645" (UID: "30708e5f-0ff8-489b-b77e-226517ce5645"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:57:45 crc kubenswrapper[4705]: I0124 07:57:45.951481 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30708e5f-0ff8-489b-b77e-226517ce5645-kube-api-access-fh9fv" (OuterVolumeSpecName: "kube-api-access-fh9fv") pod "30708e5f-0ff8-489b-b77e-226517ce5645" (UID: "30708e5f-0ff8-489b-b77e-226517ce5645"). InnerVolumeSpecName "kube-api-access-fh9fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:57:46 crc kubenswrapper[4705]: I0124 07:57:46.031051 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh9fv\" (UniqueName: \"kubernetes.io/projected/30708e5f-0ff8-489b-b77e-226517ce5645-kube-api-access-fh9fv\") on node \"crc\" DevicePath \"\"" Jan 24 07:57:46 crc kubenswrapper[4705]: I0124 07:57:46.031088 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30708e5f-0ff8-489b-b77e-226517ce5645-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:57:46 crc kubenswrapper[4705]: I0124 07:57:46.031101 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30708e5f-0ff8-489b-b77e-226517ce5645-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:57:46 crc kubenswrapper[4705]: I0124 07:57:46.627437 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj75j" event={"ID":"30708e5f-0ff8-489b-b77e-226517ce5645","Type":"ContainerDied","Data":"64139f95c9d54d4ef9d106993f4a2c8f6fc1cb2bc638bed317cc8dd78c4be199"} Jan 24 07:57:46 crc kubenswrapper[4705]: I0124 07:57:46.627490 4705 scope.go:117] "RemoveContainer" containerID="d674f6dc3648e78b0a6ead9b80b125964ce74379bc30431c1aecd3843751dda7" Jan 24 07:57:46 crc kubenswrapper[4705]: I0124 07:57:46.627515 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nj75j" Jan 24 07:57:46 crc kubenswrapper[4705]: I0124 07:57:46.666277 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nj75j"] Jan 24 07:57:46 crc kubenswrapper[4705]: I0124 07:57:46.672244 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nj75j"] Jan 24 07:57:46 crc kubenswrapper[4705]: I0124 07:57:46.745786 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert\") pod \"infra-operator-controller-manager-694cf4f878-l4fkg\" (UID: \"bef91cd6-2f77-474f-8258-e23ca5b37091\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg" Jan 24 07:57:46 crc kubenswrapper[4705]: E0124 07:57:46.746015 4705 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 07:57:46 crc kubenswrapper[4705]: E0124 07:57:46.746116 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert podName:bef91cd6-2f77-474f-8258-e23ca5b37091 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:54.746087775 +0000 UTC m=+1013.465961113 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert") pod "infra-operator-controller-manager-694cf4f878-l4fkg" (UID: "bef91cd6-2f77-474f-8258-e23ca5b37091") : secret "infra-operator-webhook-server-cert" not found Jan 24 07:57:47 crc kubenswrapper[4705]: I0124 07:57:47.056892 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz\" (UID: \"5a7f4747-1fd9-4aa3-b954-e32101ebe927\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" Jan 24 07:57:47 crc kubenswrapper[4705]: E0124 07:57:47.057055 4705 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:57:47 crc kubenswrapper[4705]: E0124 07:57:47.057148 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert podName:5a7f4747-1fd9-4aa3-b954-e32101ebe927 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:55.057123302 +0000 UTC m=+1013.776996640 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" (UID: "5a7f4747-1fd9-4aa3-b954-e32101ebe927") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:57:47 crc kubenswrapper[4705]: I0124 07:57:47.462942 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:47 crc kubenswrapper[4705]: I0124 07:57:47.463116 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:47 crc kubenswrapper[4705]: E0124 07:57:47.463271 4705 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:57:47 crc kubenswrapper[4705]: E0124 07:57:47.463324 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs podName:51813a09-552b-4f12-904a-840cf6829c80 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:55.463308449 +0000 UTC m=+1014.183181737 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs") pod "openstack-operator-controller-manager-8d6967975-rkwgg" (UID: "51813a09-552b-4f12-904a-840cf6829c80") : secret "metrics-server-cert" not found Jan 24 07:57:47 crc kubenswrapper[4705]: E0124 07:57:47.463682 4705 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:57:47 crc kubenswrapper[4705]: E0124 07:57:47.463721 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs podName:51813a09-552b-4f12-904a-840cf6829c80 nodeName:}" failed. No retries permitted until 2026-01-24 07:57:55.46371064 +0000 UTC m=+1014.183583928 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs") pod "openstack-operator-controller-manager-8d6967975-rkwgg" (UID: "51813a09-552b-4f12-904a-840cf6829c80") : secret "webhook-server-cert" not found Jan 24 07:57:47 crc kubenswrapper[4705]: I0124 07:57:47.586194 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30708e5f-0ff8-489b-b77e-226517ce5645" path="/var/lib/kubelet/pods/30708e5f-0ff8-489b-b77e-226517ce5645/volumes" Jan 24 07:57:54 crc kubenswrapper[4705]: I0124 07:57:54.782600 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert\") pod \"infra-operator-controller-manager-694cf4f878-l4fkg\" (UID: \"bef91cd6-2f77-474f-8258-e23ca5b37091\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg" Jan 24 07:57:54 crc kubenswrapper[4705]: E0124 07:57:54.782795 4705 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Jan 24 07:57:54 crc kubenswrapper[4705]: E0124 07:57:54.783289 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert podName:bef91cd6-2f77-474f-8258-e23ca5b37091 nodeName:}" failed. No retries permitted until 2026-01-24 07:58:10.783265614 +0000 UTC m=+1029.503138902 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert") pod "infra-operator-controller-manager-694cf4f878-l4fkg" (UID: "bef91cd6-2f77-474f-8258-e23ca5b37091") : secret "infra-operator-webhook-server-cert" not found Jan 24 07:57:55 crc kubenswrapper[4705]: I0124 07:57:55.087728 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz\" (UID: \"5a7f4747-1fd9-4aa3-b954-e32101ebe927\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" Jan 24 07:57:55 crc kubenswrapper[4705]: E0124 07:57:55.087900 4705 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:57:55 crc kubenswrapper[4705]: E0124 07:57:55.087952 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert podName:5a7f4747-1fd9-4aa3-b954-e32101ebe927 nodeName:}" failed. No retries permitted until 2026-01-24 07:58:11.087938177 +0000 UTC m=+1029.807811465 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" (UID: "5a7f4747-1fd9-4aa3-b954-e32101ebe927") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:57:55 crc kubenswrapper[4705]: I0124 07:57:55.492808 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:55 crc kubenswrapper[4705]: I0124 07:57:55.492967 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:57:55 crc kubenswrapper[4705]: E0124 07:57:55.493090 4705 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:57:55 crc kubenswrapper[4705]: E0124 07:57:55.493143 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs podName:51813a09-552b-4f12-904a-840cf6829c80 nodeName:}" failed. No retries permitted until 2026-01-24 07:58:11.493128944 +0000 UTC m=+1030.213002232 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs") pod "openstack-operator-controller-manager-8d6967975-rkwgg" (UID: "51813a09-552b-4f12-904a-840cf6829c80") : secret "metrics-server-cert" not found Jan 24 07:57:55 crc kubenswrapper[4705]: E0124 07:57:55.493152 4705 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:57:55 crc kubenswrapper[4705]: E0124 07:57:55.493231 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs podName:51813a09-552b-4f12-904a-840cf6829c80 nodeName:}" failed. No retries permitted until 2026-01-24 07:58:11.493212776 +0000 UTC m=+1030.213086164 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs") pod "openstack-operator-controller-manager-8d6967975-rkwgg" (UID: "51813a09-552b-4f12-904a-840cf6829c80") : secret "webhook-server-cert" not found Jan 24 07:57:59 crc kubenswrapper[4705]: E0124 07:57:59.949244 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 24 07:57:59 crc kubenswrapper[4705]: E0124 07:57:59.949885 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pf5qb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-6445j_openstack-operators(2e973d30-3868-4922-b576-12587d46810a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:57:59 crc kubenswrapper[4705]: E0124 07:57:59.951082 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6445j" podUID="2e973d30-3868-4922-b576-12587d46810a" Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.335231 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sf2dc"] Jan 24 07:58:00 crc kubenswrapper[4705]: E0124 07:58:00.335637 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30708e5f-0ff8-489b-b77e-226517ce5645" containerName="registry-server" Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.335658 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="30708e5f-0ff8-489b-b77e-226517ce5645" containerName="registry-server" Jan 24 07:58:00 crc 
kubenswrapper[4705]: E0124 07:58:00.335674 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30708e5f-0ff8-489b-b77e-226517ce5645" containerName="extract-content" Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.335681 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="30708e5f-0ff8-489b-b77e-226517ce5645" containerName="extract-content" Jan 24 07:58:00 crc kubenswrapper[4705]: E0124 07:58:00.335703 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30708e5f-0ff8-489b-b77e-226517ce5645" containerName="extract-utilities" Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.335713 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="30708e5f-0ff8-489b-b77e-226517ce5645" containerName="extract-utilities" Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.335961 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="30708e5f-0ff8-489b-b77e-226517ce5645" containerName="registry-server" Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.340207 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.345058 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sf2dc"] Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.464854 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9be272e-3f95-4d61-9846-8a8b6458c4e6-catalog-content\") pod \"certified-operators-sf2dc\" (UID: \"f9be272e-3f95-4d61-9846-8a8b6458c4e6\") " pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.464945 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9be272e-3f95-4d61-9846-8a8b6458c4e6-utilities\") pod \"certified-operators-sf2dc\" (UID: \"f9be272e-3f95-4d61-9846-8a8b6458c4e6\") " pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.465282 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb5j6\" (UniqueName: \"kubernetes.io/projected/f9be272e-3f95-4d61-9846-8a8b6458c4e6-kube-api-access-bb5j6\") pod \"certified-operators-sf2dc\" (UID: \"f9be272e-3f95-4d61-9846-8a8b6458c4e6\") " pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.566765 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9be272e-3f95-4d61-9846-8a8b6458c4e6-catalog-content\") pod \"certified-operators-sf2dc\" (UID: \"f9be272e-3f95-4d61-9846-8a8b6458c4e6\") " pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.566939 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9be272e-3f95-4d61-9846-8a8b6458c4e6-utilities\") pod \"certified-operators-sf2dc\" (UID: \"f9be272e-3f95-4d61-9846-8a8b6458c4e6\") " pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.567051 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb5j6\" (UniqueName: \"kubernetes.io/projected/f9be272e-3f95-4d61-9846-8a8b6458c4e6-kube-api-access-bb5j6\") pod \"certified-operators-sf2dc\" (UID: \"f9be272e-3f95-4d61-9846-8a8b6458c4e6\") " pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.567261 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9be272e-3f95-4d61-9846-8a8b6458c4e6-catalog-content\") pod \"certified-operators-sf2dc\" (UID: \"f9be272e-3f95-4d61-9846-8a8b6458c4e6\") " pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.567643 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9be272e-3f95-4d61-9846-8a8b6458c4e6-utilities\") pod \"certified-operators-sf2dc\" (UID: \"f9be272e-3f95-4d61-9846-8a8b6458c4e6\") " pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.594940 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb5j6\" (UniqueName: \"kubernetes.io/projected/f9be272e-3f95-4d61-9846-8a8b6458c4e6-kube-api-access-bb5j6\") pod \"certified-operators-sf2dc\" (UID: \"f9be272e-3f95-4d61-9846-8a8b6458c4e6\") " pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:00 crc kubenswrapper[4705]: I0124 07:58:00.673023 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:00 crc kubenswrapper[4705]: E0124 07:58:00.704147 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 24 07:58:00 crc kubenswrapper[4705]: E0124 07:58:00.704376 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vcxqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-mpgjf_openstack-operators(bf85561a-7710-4a15-b4b1-c8f48e50dc53): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:58:00 crc kubenswrapper[4705]: E0124 07:58:00.705644 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mpgjf" podUID="bf85561a-7710-4a15-b4b1-c8f48e50dc53" Jan 24 07:58:00 crc kubenswrapper[4705]: E0124 07:58:00.735278 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mpgjf" podUID="bf85561a-7710-4a15-b4b1-c8f48e50dc53" Jan 24 07:58:00 crc kubenswrapper[4705]: E0124 07:58:00.735495 4705 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6445j" podUID="2e973d30-3868-4922-b576-12587d46810a" Jan 24 07:58:05 crc kubenswrapper[4705]: E0124 07:58:05.289922 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 24 07:58:05 crc kubenswrapper[4705]: E0124 07:58:05.290407 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4q9ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-s86rp_openstack-operators(fd042d2d-7b0f-4ffd-b9fa-8d78a0be9e9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:58:05 crc kubenswrapper[4705]: E0124 07:58:05.291629 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-s86rp" podUID="fd042d2d-7b0f-4ffd-b9fa-8d78a0be9e9e" Jan 24 07:58:05 crc kubenswrapper[4705]: E0124 07:58:05.902403 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-s86rp" podUID="fd042d2d-7b0f-4ffd-b9fa-8d78a0be9e9e" Jan 24 07:58:06 crc kubenswrapper[4705]: E0124 07:58:06.681047 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 24 07:58:06 crc kubenswrapper[4705]: E0124 07:58:06.681281 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4qhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-x5h78_openstack-operators(49e04e6c-68bd-4ebb-a6b8-8c728b7e3ead): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:58:06 crc kubenswrapper[4705]: E0124 07:58:06.682445 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x5h78" podUID="49e04e6c-68bd-4ebb-a6b8-8c728b7e3ead" Jan 24 07:58:06 crc kubenswrapper[4705]: E0124 07:58:06.907762 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x5h78" podUID="49e04e6c-68bd-4ebb-a6b8-8c728b7e3ead" Jan 24 07:58:09 crc kubenswrapper[4705]: E0124 07:58:09.263675 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" Jan 24 07:58:09 crc kubenswrapper[4705]: E0124 07:58:09.264254 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vkh26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-v629x_openstack-operators(241de282-17c7-48c1-b4cb-fbeb9b98bd08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:58:09 crc kubenswrapper[4705]: E0124 07:58:09.265456 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v629x" podUID="241de282-17c7-48c1-b4cb-fbeb9b98bd08" Jan 24 07:58:09 crc kubenswrapper[4705]: E0124 07:58:09.927455 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v629x" podUID="241de282-17c7-48c1-b4cb-fbeb9b98bd08" Jan 24 07:58:09 crc kubenswrapper[4705]: E0124 07:58:09.928334 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 24 07:58:09 crc kubenswrapper[4705]: E0124 07:58:09.928661 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gbc2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-tgzdf_openstack-operators(23f7495d-06eb-45e5-b5e6-e50169760b0b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:58:09 crc kubenswrapper[4705]: E0124 07:58:09.929864 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tgzdf" podUID="23f7495d-06eb-45e5-b5e6-e50169760b0b" Jan 24 07:58:10 crc kubenswrapper[4705]: E0124 07:58:10.346351 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 24 07:58:10 crc kubenswrapper[4705]: E0124 07:58:10.346540 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n8fmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-d4kz9_openstack-operators(3fac85c2-ff36-44a9-ae92-947df3332178): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:58:10 crc kubenswrapper[4705]: E0124 07:58:10.347782 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d4kz9" podUID="3fac85c2-ff36-44a9-ae92-947df3332178" Jan 24 07:58:10 crc kubenswrapper[4705]: I0124 07:58:10.864232 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert\") pod \"infra-operator-controller-manager-694cf4f878-l4fkg\" (UID: \"bef91cd6-2f77-474f-8258-e23ca5b37091\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg" Jan 24 07:58:10 crc kubenswrapper[4705]: I0124 07:58:10.869778 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bef91cd6-2f77-474f-8258-e23ca5b37091-cert\") pod \"infra-operator-controller-manager-694cf4f878-l4fkg\" (UID: \"bef91cd6-2f77-474f-8258-e23ca5b37091\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg" Jan 24 07:58:10 crc kubenswrapper[4705]: E0124 07:58:10.933410 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tgzdf" podUID="23f7495d-06eb-45e5-b5e6-e50169760b0b" Jan 24 07:58:10 crc kubenswrapper[4705]: E0124 07:58:10.933434 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d4kz9" podUID="3fac85c2-ff36-44a9-ae92-947df3332178" Jan 24 07:58:11 crc kubenswrapper[4705]: I0124 07:58:11.024975 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-cfgtv" Jan 24 07:58:11 crc kubenswrapper[4705]: I0124 07:58:11.033391 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg" Jan 24 07:58:11 crc kubenswrapper[4705]: I0124 07:58:11.168445 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz\" (UID: \"5a7f4747-1fd9-4aa3-b954-e32101ebe927\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" Jan 24 07:58:11 crc kubenswrapper[4705]: I0124 07:58:11.184955 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7f4747-1fd9-4aa3-b954-e32101ebe927-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz\" (UID: \"5a7f4747-1fd9-4aa3-b954-e32101ebe927\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" Jan 24 07:58:11 crc kubenswrapper[4705]: I0124 07:58:11.208375 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-82gr2" Jan 24 07:58:11 crc kubenswrapper[4705]: I0124 07:58:11.217726 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" Jan 24 07:58:11 crc kubenswrapper[4705]: I0124 07:58:11.574455 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:58:11 crc kubenswrapper[4705]: I0124 07:58:11.574606 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:58:11 crc kubenswrapper[4705]: I0124 07:58:11.578440 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-webhook-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:58:11 crc kubenswrapper[4705]: I0124 07:58:11.589442 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51813a09-552b-4f12-904a-840cf6829c80-metrics-certs\") pod \"openstack-operator-controller-manager-8d6967975-rkwgg\" (UID: \"51813a09-552b-4f12-904a-840cf6829c80\") " pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:58:11 crc kubenswrapper[4705]: I0124 07:58:11.657388 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-bpdlx" Jan 24 07:58:11 crc kubenswrapper[4705]: I0124 07:58:11.665989 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:58:11 crc kubenswrapper[4705]: E0124 07:58:11.862550 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831" Jan 24 07:58:11 crc kubenswrapper[4705]: E0124 07:58:11.862779 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dk4pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-drzkh_openstack-operators(b14e84b5-9dcb-4280-9480-a6f34bf8c8dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:58:11 crc kubenswrapper[4705]: E0124 07:58:11.864213 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-drzkh" podUID="b14e84b5-9dcb-4280-9480-a6f34bf8c8dd" Jan 24 07:58:11 crc kubenswrapper[4705]: E0124 07:58:11.937969 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-drzkh" podUID="b14e84b5-9dcb-4280-9480-a6f34bf8c8dd" Jan 24 07:58:14 crc kubenswrapper[4705]: I0124 07:58:14.656470 4705 scope.go:117] "RemoveContainer" containerID="5bbc7c21e6b4c8a96218a65ece8f349dab2118b953907766aa10788d438e27cb" Jan 24 07:58:15 crc kubenswrapper[4705]: I0124 07:58:15.787696 4705 scope.go:117] "RemoveContainer" containerID="6a74280053814f799637d0395e3ac29420200d09a21c57d8cd2e9dc7281b6608" Jan 24 07:58:16 crc kubenswrapper[4705]: I0124 07:58:16.350469 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sf2dc"] Jan 24 07:58:16 crc kubenswrapper[4705]: I0124 07:58:16.380219 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg"] Jan 24 07:58:16 crc kubenswrapper[4705]: I0124 07:58:16.390652 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg"] Jan 24 07:58:16 crc kubenswrapper[4705]: W0124 07:58:16.403894 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51813a09_552b_4f12_904a_840cf6829c80.slice/crio-de6144ff16fb2e7e7d61cecd28330edbcb7c3d5791769ae4b13d2e27424ab8ed WatchSource:0}: Error finding container de6144ff16fb2e7e7d61cecd28330edbcb7c3d5791769ae4b13d2e27424ab8ed: Status 404 returned error can't find the container with id de6144ff16fb2e7e7d61cecd28330edbcb7c3d5791769ae4b13d2e27424ab8ed Jan 24 07:58:16 crc kubenswrapper[4705]: I0124 07:58:16.418881 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz"] Jan 24 07:58:16 crc kubenswrapper[4705]: W0124 07:58:16.433481 4705 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9be272e_3f95_4d61_9846_8a8b6458c4e6.slice/crio-b92d7a84f91c4840cc1843579829531dbd11f750243dda951699551d0af8dce7 WatchSource:0}: Error finding container b92d7a84f91c4840cc1843579829531dbd11f750243dda951699551d0af8dce7: Status 404 returned error can't find the container with id b92d7a84f91c4840cc1843579829531dbd11f750243dda951699551d0af8dce7 Jan 24 07:58:16 crc kubenswrapper[4705]: W0124 07:58:16.444211 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbef91cd6_2f77_474f_8258_e23ca5b37091.slice/crio-6fffc18f7fdc9c6363a93eede737a39047173b36fe692c5060cc45a4ddf89fe7 WatchSource:0}: Error finding container 6fffc18f7fdc9c6363a93eede737a39047173b36fe692c5060cc45a4ddf89fe7: Status 404 returned error can't find the container with id 6fffc18f7fdc9c6363a93eede737a39047173b36fe692c5060cc45a4ddf89fe7 Jan 24 07:58:16 crc kubenswrapper[4705]: W0124 07:58:16.484456 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a7f4747_1fd9_4aa3_b954_e32101ebe927.slice/crio-8844524b171c4399db98d0b4ca96b4fa1650052444d9636d2fc837044eec49e3 WatchSource:0}: Error finding container 8844524b171c4399db98d0b4ca96b4fa1650052444d9636d2fc837044eec49e3: Status 404 returned error can't find the container with id 8844524b171c4399db98d0b4ca96b4fa1650052444d9636d2fc837044eec49e3 Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.105493 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-r5j5v" event={"ID":"652fc521-e0f0-4d0c-8ca3-8077222ab892","Type":"ContainerStarted","Data":"1b3e703ba67414352bb691e47cf4f25e44d7994e67217ac68fb2bf5b09199b4e"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.106939 4705 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-r5j5v" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.108076 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sf2dc" event={"ID":"f9be272e-3f95-4d61-9846-8a8b6458c4e6","Type":"ContainerStarted","Data":"b92d7a84f91c4840cc1843579829531dbd11f750243dda951699551d0af8dce7"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.109189 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-dbvkx" event={"ID":"91182c35-90b8-409a-ac96-191c754f5c9d","Type":"ContainerStarted","Data":"408b246289f873ad64eed42d1e65e0d90b112a30593aace7eeaa2af37e67a599"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.109685 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-dbvkx" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.110918 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mpgjf" event={"ID":"bf85561a-7710-4a15-b4b1-c8f48e50dc53","Type":"ContainerStarted","Data":"deac7665cfbfad17f8c0603d43ed9797e4539f1332acd6914986527b59614772"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.111352 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mpgjf" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.112600 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-sjc8r" event={"ID":"338f4812-65cb-4a3e-a83e-73a72e4f31eb","Type":"ContainerStarted","Data":"a1ab0a1708f9aa9eff257268ba043b4f9fea56c32cf118806477fddb61cf18c5"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.113013 4705 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-sjc8r" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.123630 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-c6x57" event={"ID":"a08c7b5c-356a-4a05-a600-82f6bf5aad91","Type":"ContainerStarted","Data":"5d40069f546559aa175226bf8f35dff37a6ae6b8720df7b098656b53765b713d"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.124401 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-c6x57" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.127567 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-k8q6j" event={"ID":"eb05abb5-cee5-4e0d-9217-6154aebe5836","Type":"ContainerStarted","Data":"51b3357952e467d4012266c5b6f1c16221d1316816bb4d7b2ec43452a009b5c2"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.128024 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-k8q6j" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.136491 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-f2vcw" event={"ID":"404be92b-a12e-42d7-868f-adf825bc7c68","Type":"ContainerStarted","Data":"a093c2ad10934f0850fbcdb4e103599fefbf9189377f09d3c94adc205cb6bc1f"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.136759 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-f2vcw" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.146488 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5nngs" 
event={"ID":"0ab0b790-0cf4-453a-9c4b-97a6ddf2bd36","Type":"ContainerStarted","Data":"2e90820f486443014b6c02b0ae8a7a99028da1477beaa10c13837478a2a74484"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.147019 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5nngs" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.148035 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg" event={"ID":"bef91cd6-2f77-474f-8258-e23ca5b37091","Type":"ContainerStarted","Data":"6fffc18f7fdc9c6363a93eede737a39047173b36fe692c5060cc45a4ddf89fe7"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.148984 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-869gl" event={"ID":"3c52d864-16a1-4eb6-80e9-ac7e5009bbd9","Type":"ContainerStarted","Data":"f2af17b38d2d33fef9e41562c446020d523f845c3699eed92f8c7965a1767374"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.149512 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-869gl" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.167570 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-nf4zc" event={"ID":"0a119afa-9520-46bc-8fde-0b2974035e48","Type":"ContainerStarted","Data":"8aa486ec55e2b985aa4ae9ef5ac4f2c87126227e3f47dbdafd2edc8d80078720"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.168290 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-nf4zc" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.170321 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-r5j5v" podStartSLOduration=8.268804881 podStartE2EDuration="39.17029384s" podCreationTimestamp="2026-01-24 07:57:38 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.42594627 +0000 UTC m=+999.145819568" lastFinishedPulling="2026-01-24 07:58:11.327435239 +0000 UTC m=+1030.047308527" observedRunningTime="2026-01-24 07:58:17.16680879 +0000 UTC m=+1035.886682098" watchObservedRunningTime="2026-01-24 07:58:17.17029384 +0000 UTC m=+1035.890167128" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.184227 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-xsz7p" event={"ID":"93151962-475c-412e-98d3-7363d8fd5f6c","Type":"ContainerStarted","Data":"c1180a5de3bb8f62b2a6ac40db22a220e9819b5869464db82ce2ab244835c9ec"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.184881 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-xsz7p" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.190129 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" event={"ID":"51813a09-552b-4f12-904a-840cf6829c80","Type":"ContainerStarted","Data":"de6144ff16fb2e7e7d61cecd28330edbcb7c3d5791769ae4b13d2e27424ab8ed"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.203542 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5nngs" podStartSLOduration=3.326188209 podStartE2EDuration="38.203521687s" podCreationTimestamp="2026-01-24 07:57:39 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.868871915 +0000 UTC m=+999.588745203" lastFinishedPulling="2026-01-24 07:58:15.746205393 +0000 UTC m=+1034.466078681" observedRunningTime="2026-01-24 07:58:17.201775177 +0000 UTC 
m=+1035.921648465" watchObservedRunningTime="2026-01-24 07:58:17.203521687 +0000 UTC m=+1035.923394985" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.204420 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" event={"ID":"5a7f4747-1fd9-4aa3-b954-e32101ebe927","Type":"ContainerStarted","Data":"8844524b171c4399db98d0b4ca96b4fa1650052444d9636d2fc837044eec49e3"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.218122 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sj4dw" event={"ID":"be549f5c-a477-4e7d-a928-0e9885ffa225","Type":"ContainerStarted","Data":"26ebfd7325ebc892e5278a929b299503702861a8da656c029be4aa9725d9f28e"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.219935 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sj4dw" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.294723 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6445j" event={"ID":"2e973d30-3868-4922-b576-12587d46810a","Type":"ContainerStarted","Data":"51f69213c24fd7f1fc026bcdf529f6c6185765e46e91b40044c38ee3e6374c8d"} Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.295299 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6445j" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.438505 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-869gl" podStartSLOduration=3.366779598 podStartE2EDuration="38.438486843s" podCreationTimestamp="2026-01-24 07:57:39 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.686803072 +0000 UTC m=+999.406676360" 
lastFinishedPulling="2026-01-24 07:58:15.758510317 +0000 UTC m=+1034.478383605" observedRunningTime="2026-01-24 07:58:17.438005689 +0000 UTC m=+1036.157878987" watchObservedRunningTime="2026-01-24 07:58:17.438486843 +0000 UTC m=+1036.158360121" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.440506 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-f2vcw" podStartSLOduration=3.561728291 podStartE2EDuration="38.440492421s" podCreationTimestamp="2026-01-24 07:57:39 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.879876971 +0000 UTC m=+999.599750269" lastFinishedPulling="2026-01-24 07:58:15.758641101 +0000 UTC m=+1034.478514399" observedRunningTime="2026-01-24 07:58:17.346078182 +0000 UTC m=+1036.065951470" watchObservedRunningTime="2026-01-24 07:58:17.440492421 +0000 UTC m=+1036.160365709" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.713584 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mpgjf" podStartSLOduration=4.507014811 podStartE2EDuration="39.71356922s" podCreationTimestamp="2026-01-24 07:57:38 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.686514063 +0000 UTC m=+999.406387351" lastFinishedPulling="2026-01-24 07:58:15.893068472 +0000 UTC m=+1034.612941760" observedRunningTime="2026-01-24 07:58:17.518515376 +0000 UTC m=+1036.238388664" watchObservedRunningTime="2026-01-24 07:58:17.71356922 +0000 UTC m=+1036.433442508" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.717371 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-sjc8r" podStartSLOduration=9.074398203 podStartE2EDuration="39.717352807s" podCreationTimestamp="2026-01-24 07:57:38 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.686771691 +0000 UTC m=+999.406644979" 
lastFinishedPulling="2026-01-24 07:58:11.329726295 +0000 UTC m=+1030.049599583" observedRunningTime="2026-01-24 07:58:17.712462999 +0000 UTC m=+1036.432336287" watchObservedRunningTime="2026-01-24 07:58:17.717352807 +0000 UTC m=+1036.437226085" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.774961 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-dbvkx" podStartSLOduration=8.730212415 podStartE2EDuration="39.774940019s" podCreationTimestamp="2026-01-24 07:57:38 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.282710265 +0000 UTC m=+999.002583553" lastFinishedPulling="2026-01-24 07:58:11.327437849 +0000 UTC m=+1030.047311157" observedRunningTime="2026-01-24 07:58:17.774450385 +0000 UTC m=+1036.494323673" watchObservedRunningTime="2026-01-24 07:58:17.774940019 +0000 UTC m=+1036.494813307" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.869726 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-k8q6j" podStartSLOduration=4.007356041 podStartE2EDuration="38.869706878s" podCreationTimestamp="2026-01-24 07:57:39 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.89616135 +0000 UTC m=+999.616034638" lastFinishedPulling="2026-01-24 07:58:15.758512187 +0000 UTC m=+1034.478385475" observedRunningTime="2026-01-24 07:58:17.864666456 +0000 UTC m=+1036.584539734" watchObservedRunningTime="2026-01-24 07:58:17.869706878 +0000 UTC m=+1036.589580166" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.919108 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-c6x57" podStartSLOduration=4.054283261 podStartE2EDuration="38.919092399s" podCreationTimestamp="2026-01-24 07:57:39 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.880256132 +0000 UTC m=+999.600129420" 
lastFinishedPulling="2026-01-24 07:58:15.74506527 +0000 UTC m=+1034.464938558" observedRunningTime="2026-01-24 07:58:17.915052666 +0000 UTC m=+1036.634925954" watchObservedRunningTime="2026-01-24 07:58:17.919092399 +0000 UTC m=+1036.638965687" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.996478 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-nf4zc" podStartSLOduration=8.848157673 podStartE2EDuration="39.996461099s" podCreationTimestamp="2026-01-24 07:57:38 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.180975806 +0000 UTC m=+998.900849094" lastFinishedPulling="2026-01-24 07:58:11.329279232 +0000 UTC m=+1030.049152520" observedRunningTime="2026-01-24 07:58:17.933714601 +0000 UTC m=+1036.653587889" watchObservedRunningTime="2026-01-24 07:58:17.996461099 +0000 UTC m=+1036.716334377" Jan 24 07:58:17 crc kubenswrapper[4705]: I0124 07:58:17.997395 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-xsz7p" podStartSLOduration=8.964113221 podStartE2EDuration="39.997388515s" podCreationTimestamp="2026-01-24 07:57:38 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.296456631 +0000 UTC m=+999.016329919" lastFinishedPulling="2026-01-24 07:58:11.329731905 +0000 UTC m=+1030.049605213" observedRunningTime="2026-01-24 07:58:17.995687347 +0000 UTC m=+1036.715560635" watchObservedRunningTime="2026-01-24 07:58:17.997388515 +0000 UTC m=+1036.717261803" Jan 24 07:58:18 crc kubenswrapper[4705]: I0124 07:58:18.018733 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6445j" podStartSLOduration=3.953527119 podStartE2EDuration="39.018718686s" podCreationTimestamp="2026-01-24 07:57:39 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.832863468 +0000 UTC m=+999.552736756" 
lastFinishedPulling="2026-01-24 07:58:15.898055025 +0000 UTC m=+1034.617928323" observedRunningTime="2026-01-24 07:58:18.014467686 +0000 UTC m=+1036.734340974" watchObservedRunningTime="2026-01-24 07:58:18.018718686 +0000 UTC m=+1036.738591974" Jan 24 07:58:18 crc kubenswrapper[4705]: I0124 07:58:18.377740 4705 generic.go:334] "Generic (PLEG): container finished" podID="f9be272e-3f95-4d61-9846-8a8b6458c4e6" containerID="ae0ffec1d153174ab7be78dfa5993f68cd4e56f784a9eaf7cf14761dfab148fb" exitCode=0 Jan 24 07:58:18 crc kubenswrapper[4705]: I0124 07:58:18.377800 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sf2dc" event={"ID":"f9be272e-3f95-4d61-9846-8a8b6458c4e6","Type":"ContainerDied","Data":"ae0ffec1d153174ab7be78dfa5993f68cd4e56f784a9eaf7cf14761dfab148fb"} Jan 24 07:58:18 crc kubenswrapper[4705]: I0124 07:58:18.380295 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7c64596589-v9zxl" event={"ID":"f5382856-3a6e-4d10-beb2-9df688e2f6c7","Type":"ContainerStarted","Data":"24540a9819b6667393749b06dfe319cca994d8e47442813f916846aeefa0171c"} Jan 24 07:58:18 crc kubenswrapper[4705]: I0124 07:58:18.380764 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7c64596589-v9zxl" Jan 24 07:58:18 crc kubenswrapper[4705]: I0124 07:58:18.387225 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" event={"ID":"51813a09-552b-4f12-904a-840cf6829c80","Type":"ContainerStarted","Data":"2f7e861b4ac31ce122fb2f0d2046ed8f7090c942d1d5ccf7806e99d304c98ebd"} Jan 24 07:58:18 crc kubenswrapper[4705]: I0124 07:58:18.387265 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:58:18 crc kubenswrapper[4705]: I0124 
07:58:18.407970 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sj4dw" podStartSLOduration=9.521675208 podStartE2EDuration="40.407947949s" podCreationTimestamp="2026-01-24 07:57:38 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.44121248 +0000 UTC m=+999.161085768" lastFinishedPulling="2026-01-24 07:58:11.327485211 +0000 UTC m=+1030.047358509" observedRunningTime="2026-01-24 07:58:18.095955871 +0000 UTC m=+1036.815829149" watchObservedRunningTime="2026-01-24 07:58:18.407947949 +0000 UTC m=+1037.127821237" Jan 24 07:58:18 crc kubenswrapper[4705]: I0124 07:58:18.470098 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" podStartSLOduration=39.470076939 podStartE2EDuration="39.470076939s" podCreationTimestamp="2026-01-24 07:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:58:18.446138835 +0000 UTC m=+1037.166012123" watchObservedRunningTime="2026-01-24 07:58:18.470076939 +0000 UTC m=+1037.189950227" Jan 24 07:58:18 crc kubenswrapper[4705]: I0124 07:58:18.471764 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7c64596589-v9zxl" podStartSLOduration=4.29720036 podStartE2EDuration="39.471755657s" podCreationTimestamp="2026-01-24 07:57:39 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.886606355 +0000 UTC m=+999.606479643" lastFinishedPulling="2026-01-24 07:58:16.061161652 +0000 UTC m=+1034.781034940" observedRunningTime="2026-01-24 07:58:18.469212265 +0000 UTC m=+1037.189085553" watchObservedRunningTime="2026-01-24 07:58:18.471755657 +0000 UTC m=+1037.191628945" Jan 24 07:58:23 crc kubenswrapper[4705]: I0124 07:58:23.830085 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tgzdf" event={"ID":"23f7495d-06eb-45e5-b5e6-e50169760b0b","Type":"ContainerStarted","Data":"0d19ab1551a1d7acc861f8b1873de6c9165c379a9289c604e8355921df492937"} Jan 24 07:58:23 crc kubenswrapper[4705]: I0124 07:58:23.832159 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-s86rp" event={"ID":"fd042d2d-7b0f-4ffd-b9fa-8d78a0be9e9e","Type":"ContainerStarted","Data":"97907c2c4ae4bbca3c3f6cbb0851f4aa99590493befce150d71ddba67be1a085"} Jan 24 07:58:23 crc kubenswrapper[4705]: I0124 07:58:23.834032 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg" event={"ID":"bef91cd6-2f77-474f-8258-e23ca5b37091","Type":"ContainerStarted","Data":"a3ce14bc14d9fb03dda1df1790a28fa006adc23fb4f3efc7257db19adadadac9"} Jan 24 07:58:23 crc kubenswrapper[4705]: I0124 07:58:23.834303 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg" Jan 24 07:58:23 crc kubenswrapper[4705]: I0124 07:58:23.838981 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sf2dc" event={"ID":"f9be272e-3f95-4d61-9846-8a8b6458c4e6","Type":"ContainerStarted","Data":"a94f924dc7d1268eed7ee90eb634907ce636c0fd2e5977f684a492ceb189bbf8"} Jan 24 07:58:23 crc kubenswrapper[4705]: I0124 07:58:23.865999 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg" podStartSLOduration=39.085328433 podStartE2EDuration="45.865978961s" podCreationTimestamp="2026-01-24 07:57:38 +0000 UTC" firstStartedPulling="2026-01-24 07:58:16.447779395 +0000 UTC m=+1035.167652683" lastFinishedPulling="2026-01-24 07:58:23.228429893 +0000 UTC m=+1041.948303211" observedRunningTime="2026-01-24 
07:58:23.862122472 +0000 UTC m=+1042.581995760" watchObservedRunningTime="2026-01-24 07:58:23.865978961 +0000 UTC m=+1042.585852249" Jan 24 07:58:25 crc kubenswrapper[4705]: I0124 07:58:25.922555 4705 generic.go:334] "Generic (PLEG): container finished" podID="f9be272e-3f95-4d61-9846-8a8b6458c4e6" containerID="a94f924dc7d1268eed7ee90eb634907ce636c0fd2e5977f684a492ceb189bbf8" exitCode=0 Jan 24 07:58:25 crc kubenswrapper[4705]: I0124 07:58:25.922640 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sf2dc" event={"ID":"f9be272e-3f95-4d61-9846-8a8b6458c4e6","Type":"ContainerDied","Data":"a94f924dc7d1268eed7ee90eb634907ce636c0fd2e5977f684a492ceb189bbf8"} Jan 24 07:58:25 crc kubenswrapper[4705]: I0124 07:58:25.924325 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v629x" event={"ID":"241de282-17c7-48c1-b4cb-fbeb9b98bd08","Type":"ContainerStarted","Data":"f1406bf1458e361853ebb35f326435a050f584b15074e5757aed2072091804ec"} Jan 24 07:58:25 crc kubenswrapper[4705]: I0124 07:58:25.924548 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v629x" Jan 24 07:58:25 crc kubenswrapper[4705]: I0124 07:58:25.925993 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" event={"ID":"5a7f4747-1fd9-4aa3-b954-e32101ebe927","Type":"ContainerStarted","Data":"d5fc5079e9e6707e8284b29dd6fadea200dc5ca0c5a0cb55a471b88d7cb722c6"} Jan 24 07:58:25 crc kubenswrapper[4705]: I0124 07:58:25.926511 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" Jan 24 07:58:25 crc kubenswrapper[4705]: I0124 07:58:25.928369 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d4kz9" event={"ID":"3fac85c2-ff36-44a9-ae92-947df3332178","Type":"ContainerStarted","Data":"a0acde6d710973f18895e946d24f4e6f3c2699f7010c4e83ab023ddf1a540f0c"} Jan 24 07:58:25 crc kubenswrapper[4705]: I0124 07:58:25.930742 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x5h78" event={"ID":"49e04e6c-68bd-4ebb-a6b8-8c728b7e3ead","Type":"ContainerStarted","Data":"f19d9d2d7bee0aba873771f7ffcabed5117148027f9be832572fd49fe80e2a02"} Jan 24 07:58:25 crc kubenswrapper[4705]: I0124 07:58:25.930982 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x5h78" Jan 24 07:58:25 crc kubenswrapper[4705]: I0124 07:58:25.931040 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tgzdf" Jan 24 07:58:25 crc kubenswrapper[4705]: I0124 07:58:25.931052 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-s86rp" Jan 24 07:58:25 crc kubenswrapper[4705]: I0124 07:58:25.970248 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" podStartSLOduration=40.259208872 podStartE2EDuration="46.970228263s" podCreationTimestamp="2026-01-24 07:57:39 +0000 UTC" firstStartedPulling="2026-01-24 07:58:16.487629183 +0000 UTC m=+1035.207502461" lastFinishedPulling="2026-01-24 07:58:23.198648564 +0000 UTC m=+1041.918521852" observedRunningTime="2026-01-24 07:58:25.967910988 +0000 UTC m=+1044.687784276" watchObservedRunningTime="2026-01-24 07:58:25.970228263 +0000 UTC m=+1044.690101551" Jan 24 07:58:25 crc kubenswrapper[4705]: I0124 07:58:25.992139 4705 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tgzdf" podStartSLOduration=5.434553139 podStartE2EDuration="47.99212011s" podCreationTimestamp="2026-01-24 07:57:38 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.651621819 +0000 UTC m=+999.371495107" lastFinishedPulling="2026-01-24 07:58:23.20918879 +0000 UTC m=+1041.929062078" observedRunningTime="2026-01-24 07:58:25.984643729 +0000 UTC m=+1044.704517017" watchObservedRunningTime="2026-01-24 07:58:25.99212011 +0000 UTC m=+1044.711993398" Jan 24 07:58:26 crc kubenswrapper[4705]: I0124 07:58:26.008224 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x5h78" podStartSLOduration=5.472659945 podStartE2EDuration="48.008203833s" podCreationTimestamp="2026-01-24 07:57:38 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.672573912 +0000 UTC m=+999.392447200" lastFinishedPulling="2026-01-24 07:58:23.2081178 +0000 UTC m=+1041.927991088" observedRunningTime="2026-01-24 07:58:26.001216686 +0000 UTC m=+1044.721089994" watchObservedRunningTime="2026-01-24 07:58:26.008203833 +0000 UTC m=+1044.728077121" Jan 24 07:58:26 crc kubenswrapper[4705]: I0124 07:58:26.021659 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v629x" podStartSLOduration=5.387511184 podStartE2EDuration="48.021644671s" podCreationTimestamp="2026-01-24 07:57:38 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.574962191 +0000 UTC m=+999.294835479" lastFinishedPulling="2026-01-24 07:58:23.209095638 +0000 UTC m=+1041.928968966" observedRunningTime="2026-01-24 07:58:26.016689512 +0000 UTC m=+1044.736562800" watchObservedRunningTime="2026-01-24 07:58:26.021644671 +0000 UTC m=+1044.741517959" Jan 24 07:58:26 crc kubenswrapper[4705]: I0124 07:58:26.049674 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d4kz9" podStartSLOduration=4.714586672 podStartE2EDuration="47.04965357s" podCreationTimestamp="2026-01-24 07:57:39 +0000 UTC" firstStartedPulling="2026-01-24 07:57:41.028981155 +0000 UTC m=+999.748854443" lastFinishedPulling="2026-01-24 07:58:23.364048053 +0000 UTC m=+1042.083921341" observedRunningTime="2026-01-24 07:58:26.037049495 +0000 UTC m=+1044.756922793" watchObservedRunningTime="2026-01-24 07:58:26.04965357 +0000 UTC m=+1044.769526858" Jan 24 07:58:26 crc kubenswrapper[4705]: I0124 07:58:26.059518 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-s86rp" podStartSLOduration=5.469441125 podStartE2EDuration="48.059500408s" podCreationTimestamp="2026-01-24 07:57:38 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.60999858 +0000 UTC m=+999.329871868" lastFinishedPulling="2026-01-24 07:58:23.200057863 +0000 UTC m=+1041.919931151" observedRunningTime="2026-01-24 07:58:26.058542681 +0000 UTC m=+1044.778415969" watchObservedRunningTime="2026-01-24 07:58:26.059500408 +0000 UTC m=+1044.779373696" Jan 24 07:58:26 crc kubenswrapper[4705]: I0124 07:58:26.953937 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-drzkh" event={"ID":"b14e84b5-9dcb-4280-9480-a6f34bf8c8dd","Type":"ContainerStarted","Data":"20a0d523843aaeb52c23a1dda5f44cc4531a4ecd0a65396205fce150c6172262"} Jan 24 07:58:26 crc kubenswrapper[4705]: I0124 07:58:26.955113 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-drzkh" Jan 24 07:58:26 crc kubenswrapper[4705]: I0124 07:58:26.958068 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sf2dc" 
event={"ID":"f9be272e-3f95-4d61-9846-8a8b6458c4e6","Type":"ContainerStarted","Data":"600a8fb17e937ddb870bd8707366087d7cb18aa2bc986254bee6b473f07e6404"} Jan 24 07:58:26 crc kubenswrapper[4705]: I0124 07:58:26.980309 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-drzkh" podStartSLOduration=2.594255852 podStartE2EDuration="47.980293225s" podCreationTimestamp="2026-01-24 07:57:39 +0000 UTC" firstStartedPulling="2026-01-24 07:57:40.669954166 +0000 UTC m=+999.389827464" lastFinishedPulling="2026-01-24 07:58:26.055991549 +0000 UTC m=+1044.775864837" observedRunningTime="2026-01-24 07:58:26.978675319 +0000 UTC m=+1045.698548607" watchObservedRunningTime="2026-01-24 07:58:26.980293225 +0000 UTC m=+1045.700166523" Jan 24 07:58:28 crc kubenswrapper[4705]: I0124 07:58:28.988556 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-nf4zc" Jan 24 07:58:29 crc kubenswrapper[4705]: I0124 07:58:29.004888 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sf2dc" podStartSLOduration=20.897840144 podStartE2EDuration="29.004871532s" podCreationTimestamp="2026-01-24 07:58:00 +0000 UTC" firstStartedPulling="2026-01-24 07:58:18.379453337 +0000 UTC m=+1037.099326615" lastFinishedPulling="2026-01-24 07:58:26.486484715 +0000 UTC m=+1045.206358003" observedRunningTime="2026-01-24 07:58:27.000963037 +0000 UTC m=+1045.720836335" watchObservedRunningTime="2026-01-24 07:58:29.004871532 +0000 UTC m=+1047.724744820" Jan 24 07:58:29 crc kubenswrapper[4705]: I0124 07:58:29.024691 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-dbvkx" Jan 24 07:58:29 crc kubenswrapper[4705]: I0124 07:58:29.025487 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-r5j5v" Jan 24 07:58:29 crc kubenswrapper[4705]: I0124 07:58:29.075532 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-sjc8r" Jan 24 07:58:29 crc kubenswrapper[4705]: I0124 07:58:29.077298 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-xsz7p" Jan 24 07:58:29 crc kubenswrapper[4705]: I0124 07:58:29.099695 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sj4dw" Jan 24 07:58:29 crc kubenswrapper[4705]: I0124 07:58:29.293904 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-s86rp" Jan 24 07:58:29 crc kubenswrapper[4705]: I0124 07:58:29.542655 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tgzdf" Jan 24 07:58:29 crc kubenswrapper[4705]: I0124 07:58:29.542711 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mpgjf" Jan 24 07:58:29 crc kubenswrapper[4705]: I0124 07:58:29.621714 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-869gl" Jan 24 07:58:29 crc kubenswrapper[4705]: I0124 07:58:29.628305 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-f2vcw" Jan 24 07:58:29 crc kubenswrapper[4705]: I0124 07:58:29.779076 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6445j" Jan 24 07:58:29 crc kubenswrapper[4705]: I0124 07:58:29.779471 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-k8q6j" Jan 24 07:58:29 crc kubenswrapper[4705]: I0124 07:58:29.960503 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7c64596589-v9zxl" Jan 24 07:58:30 crc kubenswrapper[4705]: I0124 07:58:30.008170 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-c6x57" Jan 24 07:58:30 crc kubenswrapper[4705]: I0124 07:58:30.029694 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-5nngs" Jan 24 07:58:30 crc kubenswrapper[4705]: I0124 07:58:30.674013 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:30 crc kubenswrapper[4705]: I0124 07:58:30.674108 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:30 crc kubenswrapper[4705]: I0124 07:58:30.713433 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:31 crc kubenswrapper[4705]: I0124 07:58:31.040044 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-l4fkg" Jan 24 07:58:31 crc kubenswrapper[4705]: I0124 07:58:31.223677 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz" Jan 24 07:58:31 crc kubenswrapper[4705]: I0124 
07:58:31.672911 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-8d6967975-rkwgg" Jan 24 07:58:32 crc kubenswrapper[4705]: I0124 07:58:32.047037 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:32 crc kubenswrapper[4705]: I0124 07:58:32.091028 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sf2dc"] Jan 24 07:58:34 crc kubenswrapper[4705]: I0124 07:58:34.013457 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sf2dc" podUID="f9be272e-3f95-4d61-9846-8a8b6458c4e6" containerName="registry-server" containerID="cri-o://600a8fb17e937ddb870bd8707366087d7cb18aa2bc986254bee6b473f07e6404" gracePeriod=2 Jan 24 07:58:38 crc kubenswrapper[4705]: I0124 07:58:38.120194 4705 generic.go:334] "Generic (PLEG): container finished" podID="f9be272e-3f95-4d61-9846-8a8b6458c4e6" containerID="600a8fb17e937ddb870bd8707366087d7cb18aa2bc986254bee6b473f07e6404" exitCode=0 Jan 24 07:58:38 crc kubenswrapper[4705]: I0124 07:58:38.120266 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sf2dc" event={"ID":"f9be272e-3f95-4d61-9846-8a8b6458c4e6","Type":"ContainerDied","Data":"600a8fb17e937ddb870bd8707366087d7cb18aa2bc986254bee6b473f07e6404"} Jan 24 07:58:39 crc kubenswrapper[4705]: I0124 07:58:39.242867 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v629x" Jan 24 07:58:39 crc kubenswrapper[4705]: I0124 07:58:39.542764 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x5h78" Jan 24 07:58:39 crc kubenswrapper[4705]: I0124 07:58:39.595155 4705 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-drzkh" Jan 24 07:58:40 crc kubenswrapper[4705]: E0124 07:58:40.675022 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 600a8fb17e937ddb870bd8707366087d7cb18aa2bc986254bee6b473f07e6404 is running failed: container process not found" containerID="600a8fb17e937ddb870bd8707366087d7cb18aa2bc986254bee6b473f07e6404" cmd=["grpc_health_probe","-addr=:50051"] Jan 24 07:58:40 crc kubenswrapper[4705]: E0124 07:58:40.675589 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 600a8fb17e937ddb870bd8707366087d7cb18aa2bc986254bee6b473f07e6404 is running failed: container process not found" containerID="600a8fb17e937ddb870bd8707366087d7cb18aa2bc986254bee6b473f07e6404" cmd=["grpc_health_probe","-addr=:50051"] Jan 24 07:58:40 crc kubenswrapper[4705]: E0124 07:58:40.675988 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 600a8fb17e937ddb870bd8707366087d7cb18aa2bc986254bee6b473f07e6404 is running failed: container process not found" containerID="600a8fb17e937ddb870bd8707366087d7cb18aa2bc986254bee6b473f07e6404" cmd=["grpc_health_probe","-addr=:50051"] Jan 24 07:58:40 crc kubenswrapper[4705]: E0124 07:58:40.676030 4705 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 600a8fb17e937ddb870bd8707366087d7cb18aa2bc986254bee6b473f07e6404 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-sf2dc" podUID="f9be272e-3f95-4d61-9846-8a8b6458c4e6" containerName="registry-server" Jan 24 07:58:43 crc kubenswrapper[4705]: I0124 
07:58:43.665946 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:43 crc kubenswrapper[4705]: I0124 07:58:43.690645 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb5j6\" (UniqueName: \"kubernetes.io/projected/f9be272e-3f95-4d61-9846-8a8b6458c4e6-kube-api-access-bb5j6\") pod \"f9be272e-3f95-4d61-9846-8a8b6458c4e6\" (UID: \"f9be272e-3f95-4d61-9846-8a8b6458c4e6\") " Jan 24 07:58:43 crc kubenswrapper[4705]: I0124 07:58:43.690687 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9be272e-3f95-4d61-9846-8a8b6458c4e6-catalog-content\") pod \"f9be272e-3f95-4d61-9846-8a8b6458c4e6\" (UID: \"f9be272e-3f95-4d61-9846-8a8b6458c4e6\") " Jan 24 07:58:43 crc kubenswrapper[4705]: I0124 07:58:43.702960 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9be272e-3f95-4d61-9846-8a8b6458c4e6-kube-api-access-bb5j6" (OuterVolumeSpecName: "kube-api-access-bb5j6") pod "f9be272e-3f95-4d61-9846-8a8b6458c4e6" (UID: "f9be272e-3f95-4d61-9846-8a8b6458c4e6"). InnerVolumeSpecName "kube-api-access-bb5j6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:58:43 crc kubenswrapper[4705]: I0124 07:58:43.753158 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9be272e-3f95-4d61-9846-8a8b6458c4e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9be272e-3f95-4d61-9846-8a8b6458c4e6" (UID: "f9be272e-3f95-4d61-9846-8a8b6458c4e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:58:43 crc kubenswrapper[4705]: I0124 07:58:43.791670 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9be272e-3f95-4d61-9846-8a8b6458c4e6-utilities\") pod \"f9be272e-3f95-4d61-9846-8a8b6458c4e6\" (UID: \"f9be272e-3f95-4d61-9846-8a8b6458c4e6\") " Jan 24 07:58:43 crc kubenswrapper[4705]: I0124 07:58:43.792211 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb5j6\" (UniqueName: \"kubernetes.io/projected/f9be272e-3f95-4d61-9846-8a8b6458c4e6-kube-api-access-bb5j6\") on node \"crc\" DevicePath \"\"" Jan 24 07:58:43 crc kubenswrapper[4705]: I0124 07:58:43.792234 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9be272e-3f95-4d61-9846-8a8b6458c4e6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:58:43 crc kubenswrapper[4705]: I0124 07:58:43.792510 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9be272e-3f95-4d61-9846-8a8b6458c4e6-utilities" (OuterVolumeSpecName: "utilities") pod "f9be272e-3f95-4d61-9846-8a8b6458c4e6" (UID: "f9be272e-3f95-4d61-9846-8a8b6458c4e6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:58:43 crc kubenswrapper[4705]: I0124 07:58:43.893545 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9be272e-3f95-4d61-9846-8a8b6458c4e6-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:58:44 crc kubenswrapper[4705]: I0124 07:58:44.161123 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sf2dc" event={"ID":"f9be272e-3f95-4d61-9846-8a8b6458c4e6","Type":"ContainerDied","Data":"b92d7a84f91c4840cc1843579829531dbd11f750243dda951699551d0af8dce7"} Jan 24 07:58:44 crc kubenswrapper[4705]: I0124 07:58:44.161179 4705 scope.go:117] "RemoveContainer" containerID="600a8fb17e937ddb870bd8707366087d7cb18aa2bc986254bee6b473f07e6404" Jan 24 07:58:44 crc kubenswrapper[4705]: I0124 07:58:44.161524 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sf2dc" Jan 24 07:58:44 crc kubenswrapper[4705]: I0124 07:58:44.188551 4705 scope.go:117] "RemoveContainer" containerID="a94f924dc7d1268eed7ee90eb634907ce636c0fd2e5977f684a492ceb189bbf8" Jan 24 07:58:44 crc kubenswrapper[4705]: I0124 07:58:44.191898 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sf2dc"] Jan 24 07:58:44 crc kubenswrapper[4705]: I0124 07:58:44.198194 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sf2dc"] Jan 24 07:58:44 crc kubenswrapper[4705]: I0124 07:58:44.234866 4705 scope.go:117] "RemoveContainer" containerID="ae0ffec1d153174ab7be78dfa5993f68cd4e56f784a9eaf7cf14761dfab148fb" Jan 24 07:58:45 crc kubenswrapper[4705]: I0124 07:58:45.584850 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9be272e-3f95-4d61-9846-8a8b6458c4e6" path="/var/lib/kubelet/pods/f9be272e-3f95-4d61-9846-8a8b6458c4e6/volumes" Jan 24 07:58:53 crc 
kubenswrapper[4705]: I0124 07:58:53.466223 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kg95d"] Jan 24 07:58:53 crc kubenswrapper[4705]: E0124 07:58:53.467292 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9be272e-3f95-4d61-9846-8a8b6458c4e6" containerName="extract-utilities" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.467377 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9be272e-3f95-4d61-9846-8a8b6458c4e6" containerName="extract-utilities" Jan 24 07:58:53 crc kubenswrapper[4705]: E0124 07:58:53.467411 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9be272e-3f95-4d61-9846-8a8b6458c4e6" containerName="extract-content" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.467418 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9be272e-3f95-4d61-9846-8a8b6458c4e6" containerName="extract-content" Jan 24 07:58:53 crc kubenswrapper[4705]: E0124 07:58:53.467427 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9be272e-3f95-4d61-9846-8a8b6458c4e6" containerName="registry-server" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.467433 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9be272e-3f95-4d61-9846-8a8b6458c4e6" containerName="registry-server" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.467713 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9be272e-3f95-4d61-9846-8a8b6458c4e6" containerName="registry-server" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.470153 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kg95d" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.482423 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.482952 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.483443 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.483479 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-477r6" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.485865 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kg95d"] Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.542644 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2rklp"] Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.543769 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afceee1b-a636-4cc1-9d0b-8b1124f6370d-config\") pod \"dnsmasq-dns-675f4bcbfc-kg95d\" (UID: \"afceee1b-a636-4cc1-9d0b-8b1124f6370d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kg95d" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.543895 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klbbr\" (UniqueName: \"kubernetes.io/projected/afceee1b-a636-4cc1-9d0b-8b1124f6370d-kube-api-access-klbbr\") pod \"dnsmasq-dns-675f4bcbfc-kg95d\" (UID: \"afceee1b-a636-4cc1-9d0b-8b1124f6370d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kg95d" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.543989 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2rklp" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.548894 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.555452 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2rklp"] Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.644773 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afceee1b-a636-4cc1-9d0b-8b1124f6370d-config\") pod \"dnsmasq-dns-675f4bcbfc-kg95d\" (UID: \"afceee1b-a636-4cc1-9d0b-8b1124f6370d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kg95d" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.644857 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klbbr\" (UniqueName: \"kubernetes.io/projected/afceee1b-a636-4cc1-9d0b-8b1124f6370d-kube-api-access-klbbr\") pod \"dnsmasq-dns-675f4bcbfc-kg95d\" (UID: \"afceee1b-a636-4cc1-9d0b-8b1124f6370d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kg95d" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.646063 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afceee1b-a636-4cc1-9d0b-8b1124f6370d-config\") pod \"dnsmasq-dns-675f4bcbfc-kg95d\" (UID: \"afceee1b-a636-4cc1-9d0b-8b1124f6370d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kg95d" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.668837 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klbbr\" (UniqueName: \"kubernetes.io/projected/afceee1b-a636-4cc1-9d0b-8b1124f6370d-kube-api-access-klbbr\") pod \"dnsmasq-dns-675f4bcbfc-kg95d\" (UID: \"afceee1b-a636-4cc1-9d0b-8b1124f6370d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kg95d" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.747182 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f1e393c-6f19-4224-89ab-024dfdbfe04e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2rklp\" (UID: \"6f1e393c-6f19-4224-89ab-024dfdbfe04e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rklp" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.747231 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fqxm\" (UniqueName: \"kubernetes.io/projected/6f1e393c-6f19-4224-89ab-024dfdbfe04e-kube-api-access-7fqxm\") pod \"dnsmasq-dns-78dd6ddcc-2rklp\" (UID: \"6f1e393c-6f19-4224-89ab-024dfdbfe04e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rklp" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.747288 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f1e393c-6f19-4224-89ab-024dfdbfe04e-config\") pod \"dnsmasq-dns-78dd6ddcc-2rklp\" (UID: \"6f1e393c-6f19-4224-89ab-024dfdbfe04e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rklp" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.844115 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kg95d" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.847984 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f1e393c-6f19-4224-89ab-024dfdbfe04e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2rklp\" (UID: \"6f1e393c-6f19-4224-89ab-024dfdbfe04e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rklp" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.848026 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fqxm\" (UniqueName: \"kubernetes.io/projected/6f1e393c-6f19-4224-89ab-024dfdbfe04e-kube-api-access-7fqxm\") pod \"dnsmasq-dns-78dd6ddcc-2rklp\" (UID: \"6f1e393c-6f19-4224-89ab-024dfdbfe04e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rklp" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.848096 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f1e393c-6f19-4224-89ab-024dfdbfe04e-config\") pod \"dnsmasq-dns-78dd6ddcc-2rklp\" (UID: \"6f1e393c-6f19-4224-89ab-024dfdbfe04e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rklp" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.848835 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f1e393c-6f19-4224-89ab-024dfdbfe04e-config\") pod \"dnsmasq-dns-78dd6ddcc-2rklp\" (UID: \"6f1e393c-6f19-4224-89ab-024dfdbfe04e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rklp" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.848837 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f1e393c-6f19-4224-89ab-024dfdbfe04e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2rklp\" (UID: \"6f1e393c-6f19-4224-89ab-024dfdbfe04e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rklp" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 
07:58:53.871755 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fqxm\" (UniqueName: \"kubernetes.io/projected/6f1e393c-6f19-4224-89ab-024dfdbfe04e-kube-api-access-7fqxm\") pod \"dnsmasq-dns-78dd6ddcc-2rklp\" (UID: \"6f1e393c-6f19-4224-89ab-024dfdbfe04e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rklp" Jan 24 07:58:53 crc kubenswrapper[4705]: I0124 07:58:53.877519 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2rklp" Jan 24 07:58:54 crc kubenswrapper[4705]: I0124 07:58:54.192805 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2rklp"] Jan 24 07:58:54 crc kubenswrapper[4705]: I0124 07:58:54.209459 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:58:54 crc kubenswrapper[4705]: I0124 07:58:54.229444 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2rklp" event={"ID":"6f1e393c-6f19-4224-89ab-024dfdbfe04e","Type":"ContainerStarted","Data":"146e8acf5dd69a41b89a01348397155f6b1c5ca4b1fa08619cf6688626e9821e"} Jan 24 07:58:54 crc kubenswrapper[4705]: I0124 07:58:54.321102 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kg95d"] Jan 24 07:58:54 crc kubenswrapper[4705]: W0124 07:58:54.331562 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafceee1b_a636_4cc1_9d0b_8b1124f6370d.slice/crio-6926b70766b4e76c8806d1b117ce89a7855d1c6b15b33602a7d01b2ca869c93a WatchSource:0}: Error finding container 6926b70766b4e76c8806d1b117ce89a7855d1c6b15b33602a7d01b2ca869c93a: Status 404 returned error can't find the container with id 6926b70766b4e76c8806d1b117ce89a7855d1c6b15b33602a7d01b2ca869c93a Jan 24 07:58:55 crc kubenswrapper[4705]: I0124 07:58:55.239717 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-675f4bcbfc-kg95d" event={"ID":"afceee1b-a636-4cc1-9d0b-8b1124f6370d","Type":"ContainerStarted","Data":"6926b70766b4e76c8806d1b117ce89a7855d1c6b15b33602a7d01b2ca869c93a"} Jan 24 07:58:56 crc kubenswrapper[4705]: I0124 07:58:56.872549 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kg95d"] Jan 24 07:58:56 crc kubenswrapper[4705]: I0124 07:58:56.926576 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vwcq9"] Jan 24 07:58:56 crc kubenswrapper[4705]: I0124 07:58:56.927969 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.063679 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62h8z\" (UniqueName: \"kubernetes.io/projected/88e64eab-eb47-47b4-aae0-86bb27fab696-kube-api-access-62h8z\") pod \"dnsmasq-dns-666b6646f7-vwcq9\" (UID: \"88e64eab-eb47-47b4-aae0-86bb27fab696\") " pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.064623 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e64eab-eb47-47b4-aae0-86bb27fab696-config\") pod \"dnsmasq-dns-666b6646f7-vwcq9\" (UID: \"88e64eab-eb47-47b4-aae0-86bb27fab696\") " pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.064720 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88e64eab-eb47-47b4-aae0-86bb27fab696-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vwcq9\" (UID: \"88e64eab-eb47-47b4-aae0-86bb27fab696\") " pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.108795 4705 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vwcq9"] Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.174073 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62h8z\" (UniqueName: \"kubernetes.io/projected/88e64eab-eb47-47b4-aae0-86bb27fab696-kube-api-access-62h8z\") pod \"dnsmasq-dns-666b6646f7-vwcq9\" (UID: \"88e64eab-eb47-47b4-aae0-86bb27fab696\") " pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.174152 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e64eab-eb47-47b4-aae0-86bb27fab696-config\") pod \"dnsmasq-dns-666b6646f7-vwcq9\" (UID: \"88e64eab-eb47-47b4-aae0-86bb27fab696\") " pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.174223 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88e64eab-eb47-47b4-aae0-86bb27fab696-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vwcq9\" (UID: \"88e64eab-eb47-47b4-aae0-86bb27fab696\") " pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.175415 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e64eab-eb47-47b4-aae0-86bb27fab696-config\") pod \"dnsmasq-dns-666b6646f7-vwcq9\" (UID: \"88e64eab-eb47-47b4-aae0-86bb27fab696\") " pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.175488 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88e64eab-eb47-47b4-aae0-86bb27fab696-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vwcq9\" (UID: \"88e64eab-eb47-47b4-aae0-86bb27fab696\") " pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" Jan 24 07:58:57 crc 
kubenswrapper[4705]: I0124 07:58:57.203869 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62h8z\" (UniqueName: \"kubernetes.io/projected/88e64eab-eb47-47b4-aae0-86bb27fab696-kube-api-access-62h8z\") pod \"dnsmasq-dns-666b6646f7-vwcq9\" (UID: \"88e64eab-eb47-47b4-aae0-86bb27fab696\") " pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.262201 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.536439 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2rklp"] Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.575122 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-szr7x"] Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.793690 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.924467 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-szr7x"] Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.985051 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p528\" (UniqueName: \"kubernetes.io/projected/282f3533-7f86-4400-8533-480ce5bb9c55-kube-api-access-9p528\") pod \"dnsmasq-dns-57d769cc4f-szr7x\" (UID: \"282f3533-7f86-4400-8533-480ce5bb9c55\") " pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.985100 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282f3533-7f86-4400-8533-480ce5bb9c55-config\") pod \"dnsmasq-dns-57d769cc4f-szr7x\" (UID: \"282f3533-7f86-4400-8533-480ce5bb9c55\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" Jan 24 07:58:57 crc kubenswrapper[4705]: I0124 07:58:57.985241 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/282f3533-7f86-4400-8533-480ce5bb9c55-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-szr7x\" (UID: \"282f3533-7f86-4400-8533-480ce5bb9c55\") " pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.182472 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p528\" (UniqueName: \"kubernetes.io/projected/282f3533-7f86-4400-8533-480ce5bb9c55-kube-api-access-9p528\") pod \"dnsmasq-dns-57d769cc4f-szr7x\" (UID: \"282f3533-7f86-4400-8533-480ce5bb9c55\") " pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.182504 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282f3533-7f86-4400-8533-480ce5bb9c55-config\") pod \"dnsmasq-dns-57d769cc4f-szr7x\" (UID: \"282f3533-7f86-4400-8533-480ce5bb9c55\") " pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.182539 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/282f3533-7f86-4400-8533-480ce5bb9c55-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-szr7x\" (UID: \"282f3533-7f86-4400-8533-480ce5bb9c55\") " pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.183381 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/282f3533-7f86-4400-8533-480ce5bb9c55-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-szr7x\" (UID: \"282f3533-7f86-4400-8533-480ce5bb9c55\") " pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" Jan 24 07:58:58 crc kubenswrapper[4705]: 
I0124 07:58:58.188387 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282f3533-7f86-4400-8533-480ce5bb9c55-config\") pod \"dnsmasq-dns-57d769cc4f-szr7x\" (UID: \"282f3533-7f86-4400-8533-480ce5bb9c55\") " pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.218007 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.219012 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p528\" (UniqueName: \"kubernetes.io/projected/282f3533-7f86-4400-8533-480ce5bb9c55-kube-api-access-9p528\") pod \"dnsmasq-dns-57d769cc4f-szr7x\" (UID: \"282f3533-7f86-4400-8533-480ce5bb9c55\") " pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.220204 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.222520 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.281813 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.282131 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.282232 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.282293 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.282442 4705 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"rabbitmq-config-data" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.282583 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4gkrp" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.282685 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.385680 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.385774 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-config-data\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.385807 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.386046 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.386140 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.386692 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mczsh\" (UniqueName: \"kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-kube-api-access-mczsh\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.386813 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.386875 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.386967 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.387006 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.387231 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.564625 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.565443 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mczsh\" (UniqueName: \"kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-kube-api-access-mczsh\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.565523 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.565551 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc 
kubenswrapper[4705]: I0124 07:58:58.565598 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.565652 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.565766 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.565862 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-config-data\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.565890 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.565913 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.566014 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.566069 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.573005 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.574900 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.578001 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " 
pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.578723 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.578935 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.579323 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.579643 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-config-data\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.580355 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.580632 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-666b6646f7-vwcq9"] Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.582810 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.591498 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.600795 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mczsh\" (UniqueName: \"kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-kube-api-access-mczsh\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.621069 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: W0124 07:58:58.660638 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88e64eab_eb47_47b4_aae0_86bb27fab696.slice/crio-859f6090a573a138dff4630168a9b9def7d75ae179d891c142abd00f1710ec8e WatchSource:0}: Error finding container 859f6090a573a138dff4630168a9b9def7d75ae179d891c142abd00f1710ec8e: Status 404 returned error can't find the container with id 
859f6090a573a138dff4630168a9b9def7d75ae179d891c142abd00f1710ec8e Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.908965 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.917940 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.920038 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.922780 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.922911 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.923003 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.923102 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.923138 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.923298 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.923454 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5hpl7" Jan 24 07:58:58 crc kubenswrapper[4705]: I0124 07:58:58.925921 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 
07:58:59.155128 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.155450 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14a437d6-0b75-49b5-a509-e9dd8beefa45-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.155479 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.155493 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14a437d6-0b75-49b5-a509-e9dd8beefa45-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.155528 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.155560 
4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.155592 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.155612 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.155635 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2th7\" (UniqueName: \"kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-kube-api-access-p2th7\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.155667 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.155687 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.164504 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.167656 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.173895 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-hc8wr" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.174149 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.174191 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.175815 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.186162 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.187920 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260190 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260267 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260305 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95a51efd-0ac3-4c02-8052-5b4017444820-kolla-config\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260333 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2th7\" (UniqueName: \"kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-kube-api-access-p2th7\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260377 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a51efd-0ac3-4c02-8052-5b4017444820-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260411 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 
07:58:59.260439 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260466 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260505 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95a51efd-0ac3-4c02-8052-5b4017444820-operator-scripts\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260540 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260568 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwrp4\" (UniqueName: \"kubernetes.io/projected/95a51efd-0ac3-4c02-8052-5b4017444820-kube-api-access-qwrp4\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260593 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14a437d6-0b75-49b5-a509-e9dd8beefa45-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260623 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260644 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14a437d6-0b75-49b5-a509-e9dd8beefa45-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260672 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/95a51efd-0ac3-4c02-8052-5b4017444820-config-data-default\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260697 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260705 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260718 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a51efd-0ac3-4c02-8052-5b4017444820-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260762 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.260785 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/95a51efd-0ac3-4c02-8052-5b4017444820-config-data-generated\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.261672 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.261796 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-config-data\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.262357 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.263440 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.264654 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.316611 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2th7\" (UniqueName: \"kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-kube-api-access-p2th7\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.328805 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 
07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.330417 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14a437d6-0b75-49b5-a509-e9dd8beefa45-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.337401 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.349562 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14a437d6-0b75-49b5-a509-e9dd8beefa45-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.357182 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.368531 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95a51efd-0ac3-4c02-8052-5b4017444820-operator-scripts\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.368588 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwrp4\" 
(UniqueName: \"kubernetes.io/projected/95a51efd-0ac3-4c02-8052-5b4017444820-kube-api-access-qwrp4\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.368629 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/95a51efd-0ac3-4c02-8052-5b4017444820-config-data-default\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.368653 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a51efd-0ac3-4c02-8052-5b4017444820-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.368690 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/95a51efd-0ac3-4c02-8052-5b4017444820-config-data-generated\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.368740 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95a51efd-0ac3-4c02-8052-5b4017444820-kolla-config\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.368771 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a51efd-0ac3-4c02-8052-5b4017444820-combined-ca-bundle\") pod \"openstack-galera-0\" 
(UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.368799 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.369492 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.370352 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95a51efd-0ac3-4c02-8052-5b4017444820-operator-scripts\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.370457 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" event={"ID":"88e64eab-eb47-47b4-aae0-86bb27fab696","Type":"ContainerStarted","Data":"859f6090a573a138dff4630168a9b9def7d75ae179d891c142abd00f1710ec8e"} Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.371022 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/95a51efd-0ac3-4c02-8052-5b4017444820-config-data-generated\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.371721 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/95a51efd-0ac3-4c02-8052-5b4017444820-config-data-default\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.372164 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95a51efd-0ac3-4c02-8052-5b4017444820-kolla-config\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.375574 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a51efd-0ac3-4c02-8052-5b4017444820-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.376517 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a51efd-0ac3-4c02-8052-5b4017444820-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.399197 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.399845 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwrp4\" (UniqueName: \"kubernetes.io/projected/95a51efd-0ac3-4c02-8052-5b4017444820-kube-api-access-qwrp4\") pod \"openstack-galera-0\" (UID: 
\"95a51efd-0ac3-4c02-8052-5b4017444820\") " pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.430903 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-szr7x"] Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.552304 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.636280 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 24 07:58:59 crc kubenswrapper[4705]: I0124 07:58:59.738274 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:58:59 crc kubenswrapper[4705]: W0124 07:58:59.795498 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6466e4f6_65ac_4f90_99a4_6cf7bc77bc57.slice/crio-330f39a76d7682b6bd6a6d0cc5a5230ffb615544da05699d23ce9ad3764454ef WatchSource:0}: Error finding container 330f39a76d7682b6bd6a6d0cc5a5230ffb615544da05699d23ce9ad3764454ef: Status 404 returned error can't find the container with id 330f39a76d7682b6bd6a6d0cc5a5230ffb615544da05699d23ce9ad3764454ef Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.540383 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.543246 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.559309 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.559595 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-bjqvq" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.560508 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.560770 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.566229 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.573482 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57","Type":"ContainerStarted","Data":"330f39a76d7682b6bd6a6d0cc5a5230ffb615544da05699d23ce9ad3764454ef"} Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.577678 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" event={"ID":"282f3533-7f86-4400-8533-480ce5bb9c55","Type":"ContainerStarted","Data":"bd35e75039d4f680e25b96ce7a172780b9746a39468d296c4a0bcc2badf582e2"} Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.651209 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.657602 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.657650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.657668 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.657704 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.657728 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.657743 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c654n\" (UniqueName: \"kubernetes.io/projected/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-kube-api-access-c654n\") pod 
\"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.657767 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.657794 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.758883 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.759017 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.759056 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.759090 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.759144 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.759171 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.759189 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c654n\" (UniqueName: \"kubernetes.io/projected/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-kube-api-access-c654n\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.759212 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " 
pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.759356 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.760139 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.760212 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.760612 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.761284 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 
07:59:00.766061 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.766613 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.847432 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c654n\" (UniqueName: \"kubernetes.io/projected/9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b-kube-api-access-c654n\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.856306 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:00 crc kubenswrapper[4705]: I0124 07:59:00.924635 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.111841 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.113360 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.119265 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-f4l94" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.119351 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.119540 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.121117 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.141180 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 24 07:59:01 crc kubenswrapper[4705]: W0124 07:59:01.423211 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95a51efd_0ac3_4c02_8052_5b4017444820.slice/crio-658a373b86176e0c2d1b5bc11d94bb2df3bd8f19d78d17eee7b19b5be41a35c6 WatchSource:0}: Error finding container 658a373b86176e0c2d1b5bc11d94bb2df3bd8f19d78d17eee7b19b5be41a35c6: Status 404 returned error can't find the container with id 658a373b86176e0c2d1b5bc11d94bb2df3bd8f19d78d17eee7b19b5be41a35c6 Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.536244 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9c2213-448d-4532-b6a6-3f6242f5ab5f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ec9c2213-448d-4532-b6a6-3f6242f5ab5f\") " pod="openstack/memcached-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.536301 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ec9c2213-448d-4532-b6a6-3f6242f5ab5f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ec9c2213-448d-4532-b6a6-3f6242f5ab5f\") " pod="openstack/memcached-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.536361 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpbh2\" (UniqueName: \"kubernetes.io/projected/ec9c2213-448d-4532-b6a6-3f6242f5ab5f-kube-api-access-zpbh2\") pod \"memcached-0\" (UID: \"ec9c2213-448d-4532-b6a6-3f6242f5ab5f\") " pod="openstack/memcached-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.536397 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec9c2213-448d-4532-b6a6-3f6242f5ab5f-config-data\") pod \"memcached-0\" (UID: \"ec9c2213-448d-4532-b6a6-3f6242f5ab5f\") " pod="openstack/memcached-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.536429 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec9c2213-448d-4532-b6a6-3f6242f5ab5f-kolla-config\") pod \"memcached-0\" (UID: \"ec9c2213-448d-4532-b6a6-3f6242f5ab5f\") " pod="openstack/memcached-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.638440 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec9c2213-448d-4532-b6a6-3f6242f5ab5f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ec9c2213-448d-4532-b6a6-3f6242f5ab5f\") " pod="openstack/memcached-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.638521 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpbh2\" (UniqueName: \"kubernetes.io/projected/ec9c2213-448d-4532-b6a6-3f6242f5ab5f-kube-api-access-zpbh2\") pod \"memcached-0\" (UID: 
\"ec9c2213-448d-4532-b6a6-3f6242f5ab5f\") " pod="openstack/memcached-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.638563 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec9c2213-448d-4532-b6a6-3f6242f5ab5f-config-data\") pod \"memcached-0\" (UID: \"ec9c2213-448d-4532-b6a6-3f6242f5ab5f\") " pod="openstack/memcached-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.638608 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec9c2213-448d-4532-b6a6-3f6242f5ab5f-kolla-config\") pod \"memcached-0\" (UID: \"ec9c2213-448d-4532-b6a6-3f6242f5ab5f\") " pod="openstack/memcached-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.638641 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9c2213-448d-4532-b6a6-3f6242f5ab5f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ec9c2213-448d-4532-b6a6-3f6242f5ab5f\") " pod="openstack/memcached-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.643194 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9c2213-448d-4532-b6a6-3f6242f5ab5f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ec9c2213-448d-4532-b6a6-3f6242f5ab5f\") " pod="openstack/memcached-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.651389 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.651563 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.658441 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/ec9c2213-448d-4532-b6a6-3f6242f5ab5f-kolla-config\") pod \"memcached-0\" (UID: \"ec9c2213-448d-4532-b6a6-3f6242f5ab5f\") " pod="openstack/memcached-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.659464 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec9c2213-448d-4532-b6a6-3f6242f5ab5f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ec9c2213-448d-4532-b6a6-3f6242f5ab5f\") " pod="openstack/memcached-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.660164 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec9c2213-448d-4532-b6a6-3f6242f5ab5f-config-data\") pod \"memcached-0\" (UID: \"ec9c2213-448d-4532-b6a6-3f6242f5ab5f\") " pod="openstack/memcached-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.695133 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpbh2\" (UniqueName: \"kubernetes.io/projected/ec9c2213-448d-4532-b6a6-3f6242f5ab5f-kube-api-access-zpbh2\") pod \"memcached-0\" (UID: \"ec9c2213-448d-4532-b6a6-3f6242f5ab5f\") " pod="openstack/memcached-0" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.703368 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"95a51efd-0ac3-4c02-8052-5b4017444820","Type":"ContainerStarted","Data":"658a373b86176e0c2d1b5bc11d94bb2df3bd8f19d78d17eee7b19b5be41a35c6"} Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.712586 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"14a437d6-0b75-49b5-a509-e9dd8beefa45","Type":"ContainerStarted","Data":"c4db1c9b8a80cca84f423ad03382da94d3c292da3a1e65e6daad0afa255cf58a"} Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.755322 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"memcached-memcached-dockercfg-f4l94" Jan 24 07:59:01 crc kubenswrapper[4705]: I0124 07:59:01.764476 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 24 07:59:02 crc kubenswrapper[4705]: I0124 07:59:02.465887 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 24 07:59:02 crc kubenswrapper[4705]: I0124 07:59:02.763017 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b","Type":"ContainerStarted","Data":"5cfae10a48ea752b7874586bd2ddbe1c061f1e75418d1dff8c6b3ba9f60134bf"} Jan 24 07:59:03 crc kubenswrapper[4705]: I0124 07:59:03.090963 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 24 07:59:03 crc kubenswrapper[4705]: I0124 07:59:03.245293 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:59:03 crc kubenswrapper[4705]: I0124 07:59:03.248513 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 07:59:03 crc kubenswrapper[4705]: I0124 07:59:03.251375 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-gvsrm" Jan 24 07:59:03 crc kubenswrapper[4705]: I0124 07:59:03.256493 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:59:03 crc kubenswrapper[4705]: I0124 07:59:03.524742 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkqbz\" (UniqueName: \"kubernetes.io/projected/c1bb965b-b26a-4471-86ef-467dde0aea03-kube-api-access-mkqbz\") pod \"kube-state-metrics-0\" (UID: \"c1bb965b-b26a-4471-86ef-467dde0aea03\") " pod="openstack/kube-state-metrics-0" Jan 24 07:59:03 crc kubenswrapper[4705]: I0124 07:59:03.701484 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkqbz\" (UniqueName: \"kubernetes.io/projected/c1bb965b-b26a-4471-86ef-467dde0aea03-kube-api-access-mkqbz\") pod \"kube-state-metrics-0\" (UID: \"c1bb965b-b26a-4471-86ef-467dde0aea03\") " pod="openstack/kube-state-metrics-0" Jan 24 07:59:03 crc kubenswrapper[4705]: I0124 07:59:03.737073 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkqbz\" (UniqueName: \"kubernetes.io/projected/c1bb965b-b26a-4471-86ef-467dde0aea03-kube-api-access-mkqbz\") pod \"kube-state-metrics-0\" (UID: \"c1bb965b-b26a-4471-86ef-467dde0aea03\") " pod="openstack/kube-state-metrics-0" Jan 24 07:59:03 crc kubenswrapper[4705]: I0124 07:59:03.819644 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ec9c2213-448d-4532-b6a6-3f6242f5ab5f","Type":"ContainerStarted","Data":"3b3f3717c03b3142a8449973eb1e850108e16e8919ba1273de458b896de80dbf"} Jan 24 07:59:03 crc kubenswrapper[4705]: I0124 07:59:03.980369 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 07:59:04 crc kubenswrapper[4705]: W0124 07:59:04.948274 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1bb965b_b26a_4471_86ef_467dde0aea03.slice/crio-37cb415cb31074999e99cfabf10051095265c15309df9761681d689e7a3f7986 WatchSource:0}: Error finding container 37cb415cb31074999e99cfabf10051095265c15309df9761681d689e7a3f7986: Status 404 returned error can't find the container with id 37cb415cb31074999e99cfabf10051095265c15309df9761681d689e7a3f7986 Jan 24 07:59:04 crc kubenswrapper[4705]: I0124 07:59:04.952337 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:59:05 crc kubenswrapper[4705]: I0124 07:59:05.967569 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c1bb965b-b26a-4471-86ef-467dde0aea03","Type":"ContainerStarted","Data":"37cb415cb31074999e99cfabf10051095265c15309df9761681d689e7a3f7986"} Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.675935 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dqhqz"] Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.683056 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.686986 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-llw8s"] Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.688943 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-xhkqc" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.689415 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.689586 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.690951 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.705989 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-llw8s"] Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.718193 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dqhqz"] Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.790344 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1972dfce-f49c-481e-a252-f1c8ad52ecc5-var-run\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.790402 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-combined-ca-bundle\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " 
pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.790500 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1972dfce-f49c-481e-a252-f1c8ad52ecc5-var-log\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.790628 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2xfm\" (UniqueName: \"kubernetes.io/projected/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-kube-api-access-t2xfm\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.790711 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1972dfce-f49c-481e-a252-f1c8ad52ecc5-etc-ovs\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.790811 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scg2f\" (UniqueName: \"kubernetes.io/projected/1972dfce-f49c-481e-a252-f1c8ad52ecc5-kube-api-access-scg2f\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.790868 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-var-log-ovn\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " 
pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.790896 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-var-run-ovn\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.791037 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-scripts\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.791074 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1972dfce-f49c-481e-a252-f1c8ad52ecc5-scripts\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.791112 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1972dfce-f49c-481e-a252-f1c8ad52ecc5-var-lib\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.791151 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-ovn-controller-tls-certs\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 
crc kubenswrapper[4705]: I0124 07:59:06.791197 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-var-run\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.892176 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scg2f\" (UniqueName: \"kubernetes.io/projected/1972dfce-f49c-481e-a252-f1c8ad52ecc5-kube-api-access-scg2f\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.892342 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-var-log-ovn\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.892363 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-var-run-ovn\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.892387 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-scripts\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.892412 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/1972dfce-f49c-481e-a252-f1c8ad52ecc5-scripts\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.892437 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1972dfce-f49c-481e-a252-f1c8ad52ecc5-var-lib\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.892456 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-ovn-controller-tls-certs\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.892477 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-var-run\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.892541 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1972dfce-f49c-481e-a252-f1c8ad52ecc5-var-run\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.892561 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-combined-ca-bundle\") pod \"ovn-controller-dqhqz\" 
(UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.892600 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1972dfce-f49c-481e-a252-f1c8ad52ecc5-var-log\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.892641 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2xfm\" (UniqueName: \"kubernetes.io/projected/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-kube-api-access-t2xfm\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.892681 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1972dfce-f49c-481e-a252-f1c8ad52ecc5-etc-ovs\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.893172 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1972dfce-f49c-481e-a252-f1c8ad52ecc5-etc-ovs\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.893298 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1972dfce-f49c-481e-a252-f1c8ad52ecc5-var-run\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.893380 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-var-run\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.894281 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1972dfce-f49c-481e-a252-f1c8ad52ecc5-var-lib\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.895448 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1972dfce-f49c-481e-a252-f1c8ad52ecc5-var-log\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.895939 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-var-log-ovn\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.896080 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-scripts\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.896285 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-var-run-ovn\") pod \"ovn-controller-dqhqz\" (UID: 
\"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.909358 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-combined-ca-bundle\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.910774 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2xfm\" (UniqueName: \"kubernetes.io/projected/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-kube-api-access-t2xfm\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.910790 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1972dfce-f49c-481e-a252-f1c8ad52ecc5-scripts\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.932291 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e50e3aa7-48d0-4559-9f09-f0a9a54232a7-ovn-controller-tls-certs\") pod \"ovn-controller-dqhqz\" (UID: \"e50e3aa7-48d0-4559-9f09-f0a9a54232a7\") " pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:06 crc kubenswrapper[4705]: I0124 07:59:06.941374 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scg2f\" (UniqueName: \"kubernetes.io/projected/1972dfce-f49c-481e-a252-f1c8ad52ecc5-kube-api-access-scg2f\") pod \"ovn-controller-ovs-llw8s\" (UID: \"1972dfce-f49c-481e-a252-f1c8ad52ecc5\") " pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:07 crc kubenswrapper[4705]: 
I0124 07:59:07.030563 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.134593 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.161713 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.163469 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.169061 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.171696 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-s2xhr" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.172361 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.172538 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.172674 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.174159 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.457569 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac239835-9243-4353-8ca5-ff79405c5009-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " 
pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.457846 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac239835-9243-4353-8ca5-ff79405c5009-config\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.457901 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac239835-9243-4353-8ca5-ff79405c5009-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.457956 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdsqh\" (UniqueName: \"kubernetes.io/projected/ac239835-9243-4353-8ca5-ff79405c5009-kube-api-access-wdsqh\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.458006 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.458031 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac239835-9243-4353-8ca5-ff79405c5009-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.458072 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac239835-9243-4353-8ca5-ff79405c5009-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.458121 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac239835-9243-4353-8ca5-ff79405c5009-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.559430 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac239835-9243-4353-8ca5-ff79405c5009-config\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.559524 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac239835-9243-4353-8ca5-ff79405c5009-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.559592 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdsqh\" (UniqueName: \"kubernetes.io/projected/ac239835-9243-4353-8ca5-ff79405c5009-kube-api-access-wdsqh\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.559610 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.559625 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac239835-9243-4353-8ca5-ff79405c5009-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.559652 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac239835-9243-4353-8ca5-ff79405c5009-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.559685 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac239835-9243-4353-8ca5-ff79405c5009-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.559703 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac239835-9243-4353-8ca5-ff79405c5009-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.560503 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac239835-9243-4353-8ca5-ff79405c5009-config\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 
07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.560789 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac239835-9243-4353-8ca5-ff79405c5009-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.561042 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.561469 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac239835-9243-4353-8ca5-ff79405c5009-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.568125 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac239835-9243-4353-8ca5-ff79405c5009-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.574751 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac239835-9243-4353-8ca5-ff79405c5009-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.593784 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ac239835-9243-4353-8ca5-ff79405c5009-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.599924 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdsqh\" (UniqueName: \"kubernetes.io/projected/ac239835-9243-4353-8ca5-ff79405c5009-kube-api-access-wdsqh\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.608981 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ac239835-9243-4353-8ca5-ff79405c5009\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:07 crc kubenswrapper[4705]: I0124 07:59:07.849906 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.845649 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.847119 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.853436 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.853860 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9mtqp" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.854226 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.856617 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.865300 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.866787 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-config\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.866838 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.866873 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.866906 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzfpg\" (UniqueName: \"kubernetes.io/projected/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-kube-api-access-bzfpg\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.866944 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.866968 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.867005 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.867110 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " 
pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.970033 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.971050 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzfpg\" (UniqueName: \"kubernetes.io/projected/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-kube-api-access-bzfpg\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.971112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.971144 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.971233 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.971339 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.971484 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.971533 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-config\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.972516 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.973610 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.973815 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") device mount path \"/mnt/openstack/pv06\"" 
pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.985897 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-config\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:09 crc kubenswrapper[4705]: I0124 07:59:09.999763 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:10 crc kubenswrapper[4705]: I0124 07:59:10.008575 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:10 crc kubenswrapper[4705]: I0124 07:59:10.012665 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:10 crc kubenswrapper[4705]: I0124 07:59:10.013779 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzfpg\" (UniqueName: \"kubernetes.io/projected/6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6-kube-api-access-bzfpg\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:10 crc kubenswrapper[4705]: I0124 07:59:10.020302 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:10 crc kubenswrapper[4705]: I0124 07:59:10.183281 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:29 crc kubenswrapper[4705]: E0124 07:59:29.702634 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 24 07:59:29 crc kubenswrapper[4705]: E0124 07:59:29.704791 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2th7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(14a437d6-0b75-49b5-a509-e9dd8beefa45): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:59:29 crc 
kubenswrapper[4705]: E0124 07:59:29.707019 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="14a437d6-0b75-49b5-a509-e9dd8beefa45" Jan 24 07:59:29 crc kubenswrapper[4705]: E0124 07:59:29.714022 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 24 07:59:29 crc kubenswrapper[4705]: E0124 07:59:29.714224 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mczsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(6466e4f6-65ac-4f90-99a4-6cf7bc77bc57): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:59:29 crc 
kubenswrapper[4705]: E0124 07:59:29.715498 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" Jan 24 07:59:30 crc kubenswrapper[4705]: E0124 07:59:30.305703 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="14a437d6-0b75-49b5-a509-e9dd8beefa45" Jan 24 07:59:30 crc kubenswrapper[4705]: E0124 07:59:30.306258 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" Jan 24 07:59:33 crc kubenswrapper[4705]: E0124 07:59:33.775333 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 24 07:59:33 crc kubenswrapper[4705]: E0124 07:59:33.775872 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c654n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:59:33 crc kubenswrapper[4705]: E0124 07:59:33.777336 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b" Jan 24 07:59:34 crc kubenswrapper[4705]: E0124 07:59:34.334404 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b" Jan 24 07:59:34 crc kubenswrapper[4705]: E0124 07:59:34.385365 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 24 07:59:34 crc kubenswrapper[4705]: E0124 07:59:34.385614 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n676h695h57fh664h86hdbhc7h686h69h5c5hb4h597h5f7hbbh596hd7h546hf5h59fh5d5h65chf9h57bh57ch5hd6h67chcch64bh54h6bh95q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zpbh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(ec9c2213-448d-4532-b6a6-3f6242f5ab5f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:59:34 crc kubenswrapper[4705]: E0124 07:59:34.386813 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="ec9c2213-448d-4532-b6a6-3f6242f5ab5f" Jan 24 07:59:34 crc kubenswrapper[4705]: E0124 07:59:34.408459 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 24 07:59:34 crc kubenswrapper[4705]: E0124 07:59:34.408898 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwrp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerRes
izePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(95a51efd-0ac3-4c02-8052-5b4017444820): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:59:34 crc kubenswrapper[4705]: E0124 07:59:34.410096 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="95a51efd-0ac3-4c02-8052-5b4017444820" Jan 24 07:59:35 crc kubenswrapper[4705]: E0124 07:59:35.352002 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="95a51efd-0ac3-4c02-8052-5b4017444820" Jan 24 07:59:35 crc kubenswrapper[4705]: E0124 07:59:35.352015 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="ec9c2213-448d-4532-b6a6-3f6242f5ab5f" Jan 24 07:59:39 crc kubenswrapper[4705]: I0124 07:59:39.334993 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dqhqz"] Jan 24 07:59:41 crc kubenswrapper[4705]: E0124 07:59:41.337311 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 24 07:59:41 crc kubenswrapper[4705]: E0124 07:59:41.337753 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62h8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-vwcq9_openstack(88e64eab-eb47-47b4-aae0-86bb27fab696): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:59:41 crc kubenswrapper[4705]: E0124 07:59:41.339196 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" podUID="88e64eab-eb47-47b4-aae0-86bb27fab696" Jan 24 07:59:41 crc kubenswrapper[4705]: E0124 07:59:41.345699 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 24 07:59:41 crc kubenswrapper[4705]: E0124 07:59:41.345850 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klbbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-kg95d_openstack(afceee1b-a636-4cc1-9d0b-8b1124f6370d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:59:41 crc kubenswrapper[4705]: E0124 07:59:41.346909 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 24 07:59:41 crc kubenswrapper[4705]: E0124 07:59:41.346976 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-kg95d" podUID="afceee1b-a636-4cc1-9d0b-8b1124f6370d" Jan 24 07:59:41 crc kubenswrapper[4705]: E0124 07:59:41.347071 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fqxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil
,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-2rklp_openstack(6f1e393c-6f19-4224-89ab-024dfdbfe04e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:59:41 crc kubenswrapper[4705]: E0124 07:59:41.348796 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-2rklp" podUID="6f1e393c-6f19-4224-89ab-024dfdbfe04e" Jan 24 07:59:41 crc kubenswrapper[4705]: E0124 07:59:41.444352 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" podUID="88e64eab-eb47-47b4-aae0-86bb27fab696" Jan 24 07:59:41 crc kubenswrapper[4705]: E0124 07:59:41.581419 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 24 07:59:41 crc kubenswrapper[4705]: E0124 07:59:41.581939 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9p528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},S
tartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-szr7x_openstack(282f3533-7f86-4400-8533-480ce5bb9c55): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:59:41 crc kubenswrapper[4705]: E0124 07:59:41.583584 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" podUID="282f3533-7f86-4400-8533-480ce5bb9c55" Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.189456 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.190962 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2rklp" Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.204535 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kg95d" Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.319775 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.330526 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f1e393c-6f19-4224-89ab-024dfdbfe04e-config\") pod \"6f1e393c-6f19-4224-89ab-024dfdbfe04e\" (UID: \"6f1e393c-6f19-4224-89ab-024dfdbfe04e\") " Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.330661 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fqxm\" (UniqueName: \"kubernetes.io/projected/6f1e393c-6f19-4224-89ab-024dfdbfe04e-kube-api-access-7fqxm\") pod \"6f1e393c-6f19-4224-89ab-024dfdbfe04e\" (UID: \"6f1e393c-6f19-4224-89ab-024dfdbfe04e\") " Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.330707 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f1e393c-6f19-4224-89ab-024dfdbfe04e-dns-svc\") pod \"6f1e393c-6f19-4224-89ab-024dfdbfe04e\" (UID: \"6f1e393c-6f19-4224-89ab-024dfdbfe04e\") " Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.330751 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klbbr\" (UniqueName: \"kubernetes.io/projected/afceee1b-a636-4cc1-9d0b-8b1124f6370d-kube-api-access-klbbr\") pod \"afceee1b-a636-4cc1-9d0b-8b1124f6370d\" (UID: \"afceee1b-a636-4cc1-9d0b-8b1124f6370d\") " Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.330858 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afceee1b-a636-4cc1-9d0b-8b1124f6370d-config\") pod \"afceee1b-a636-4cc1-9d0b-8b1124f6370d\" (UID: \"afceee1b-a636-4cc1-9d0b-8b1124f6370d\") " Jan 24 07:59:42 crc 
kubenswrapper[4705]: I0124 07:59:42.331681 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f1e393c-6f19-4224-89ab-024dfdbfe04e-config" (OuterVolumeSpecName: "config") pod "6f1e393c-6f19-4224-89ab-024dfdbfe04e" (UID: "6f1e393c-6f19-4224-89ab-024dfdbfe04e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.332451 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afceee1b-a636-4cc1-9d0b-8b1124f6370d-config" (OuterVolumeSpecName: "config") pod "afceee1b-a636-4cc1-9d0b-8b1124f6370d" (UID: "afceee1b-a636-4cc1-9d0b-8b1124f6370d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.332702 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f1e393c-6f19-4224-89ab-024dfdbfe04e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f1e393c-6f19-4224-89ab-024dfdbfe04e" (UID: "6f1e393c-6f19-4224-89ab-024dfdbfe04e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.341007 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f1e393c-6f19-4224-89ab-024dfdbfe04e-kube-api-access-7fqxm" (OuterVolumeSpecName: "kube-api-access-7fqxm") pod "6f1e393c-6f19-4224-89ab-024dfdbfe04e" (UID: "6f1e393c-6f19-4224-89ab-024dfdbfe04e"). InnerVolumeSpecName "kube-api-access-7fqxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.353687 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afceee1b-a636-4cc1-9d0b-8b1124f6370d-kube-api-access-klbbr" (OuterVolumeSpecName: "kube-api-access-klbbr") pod "afceee1b-a636-4cc1-9d0b-8b1124f6370d" (UID: "afceee1b-a636-4cc1-9d0b-8b1124f6370d"). InnerVolumeSpecName "kube-api-access-klbbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:59:42 crc kubenswrapper[4705]: W0124 07:59:42.408361 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d12964f_7c7d_48bc_8cc0_8c4b5e7ea8f6.slice/crio-c4ed4490a3048ea26e3fb059508fced4f702a6a1cad1dbc2197c942c0c6d2981 WatchSource:0}: Error finding container c4ed4490a3048ea26e3fb059508fced4f702a6a1cad1dbc2197c942c0c6d2981: Status 404 returned error can't find the container with id c4ed4490a3048ea26e3fb059508fced4f702a6a1cad1dbc2197c942c0c6d2981 Jan 24 07:59:42 crc kubenswrapper[4705]: W0124 07:59:42.415009 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac239835_9243_4353_8ca5_ff79405c5009.slice/crio-2c24fa68a74068b0c17b70a6edb3b093ac377a9252324dbe872853c0eae4c0fc WatchSource:0}: Error finding container 2c24fa68a74068b0c17b70a6edb3b093ac377a9252324dbe872853c0eae4c0fc: Status 404 returned error can't find the container with id 2c24fa68a74068b0c17b70a6edb3b093ac377a9252324dbe872853c0eae4c0fc Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.415328 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-llw8s"] Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.432909 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fqxm\" (UniqueName: 
\"kubernetes.io/projected/6f1e393c-6f19-4224-89ab-024dfdbfe04e-kube-api-access-7fqxm\") on node \"crc\" DevicePath \"\"" Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.432938 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f1e393c-6f19-4224-89ab-024dfdbfe04e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.432949 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klbbr\" (UniqueName: \"kubernetes.io/projected/afceee1b-a636-4cc1-9d0b-8b1124f6370d-kube-api-access-klbbr\") on node \"crc\" DevicePath \"\"" Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.432959 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afceee1b-a636-4cc1-9d0b-8b1124f6370d-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.432967 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f1e393c-6f19-4224-89ab-024dfdbfe04e-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.450970 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ac239835-9243-4353-8ca5-ff79405c5009","Type":"ContainerStarted","Data":"2c24fa68a74068b0c17b70a6edb3b093ac377a9252324dbe872853c0eae4c0fc"} Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.452180 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dqhqz" event={"ID":"e50e3aa7-48d0-4559-9f09-f0a9a54232a7","Type":"ContainerStarted","Data":"a220835ba835f87ad857cb361c631c9f4c3c9d1e9c5f14548a4b4998cb304516"} Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.453440 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kg95d" 
event={"ID":"afceee1b-a636-4cc1-9d0b-8b1124f6370d","Type":"ContainerDied","Data":"6926b70766b4e76c8806d1b117ce89a7855d1c6b15b33602a7d01b2ca869c93a"} Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.453481 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kg95d" Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.454684 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6","Type":"ContainerStarted","Data":"c4ed4490a3048ea26e3fb059508fced4f702a6a1cad1dbc2197c942c0c6d2981"} Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.456382 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2rklp" Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.458048 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2rklp" event={"ID":"6f1e393c-6f19-4224-89ab-024dfdbfe04e","Type":"ContainerDied","Data":"146e8acf5dd69a41b89a01348397155f6b1c5ca4b1fa08619cf6688626e9821e"} Jan 24 07:59:42 crc kubenswrapper[4705]: E0124 07:59:42.458375 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" podUID="282f3533-7f86-4400-8533-480ce5bb9c55" Jan 24 07:59:42 crc kubenswrapper[4705]: W0124 07:59:42.502656 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1972dfce_f49c_481e_a252_f1c8ad52ecc5.slice/crio-0f33b63ac8f9c32a95ff23d304eedc75185d3f9efab1b8971799e217778fb8de WatchSource:0}: Error finding container 0f33b63ac8f9c32a95ff23d304eedc75185d3f9efab1b8971799e217778fb8de: Status 404 returned error can't find the 
container with id 0f33b63ac8f9c32a95ff23d304eedc75185d3f9efab1b8971799e217778fb8de Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.546908 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2rklp"] Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.565924 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2rklp"] Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.583168 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kg95d"] Jan 24 07:59:42 crc kubenswrapper[4705]: I0124 07:59:42.591190 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kg95d"] Jan 24 07:59:43 crc kubenswrapper[4705]: I0124 07:59:43.463888 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-llw8s" event={"ID":"1972dfce-f49c-481e-a252-f1c8ad52ecc5","Type":"ContainerStarted","Data":"0f33b63ac8f9c32a95ff23d304eedc75185d3f9efab1b8971799e217778fb8de"} Jan 24 07:59:43 crc kubenswrapper[4705]: I0124 07:59:43.585446 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f1e393c-6f19-4224-89ab-024dfdbfe04e" path="/var/lib/kubelet/pods/6f1e393c-6f19-4224-89ab-024dfdbfe04e/volumes" Jan 24 07:59:43 crc kubenswrapper[4705]: I0124 07:59:43.586169 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afceee1b-a636-4cc1-9d0b-8b1124f6370d" path="/var/lib/kubelet/pods/afceee1b-a636-4cc1-9d0b-8b1124f6370d/volumes" Jan 24 07:59:46 crc kubenswrapper[4705]: I0124 07:59:46.485095 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ac239835-9243-4353-8ca5-ff79405c5009","Type":"ContainerStarted","Data":"eb36c7b34eb42d6f93110b2619e2ed12b1b90e840a88e83a12b9d7c98fafe0ca"} Jan 24 07:59:46 crc kubenswrapper[4705]: I0124 07:59:46.487001 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-dqhqz" event={"ID":"e50e3aa7-48d0-4559-9f09-f0a9a54232a7","Type":"ContainerStarted","Data":"cdafbaf7162d6567598b4ec0a19f854f9c335c96a16862b1b52ed9fe691f51cc"} Jan 24 07:59:46 crc kubenswrapper[4705]: I0124 07:59:46.487780 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-dqhqz" Jan 24 07:59:46 crc kubenswrapper[4705]: I0124 07:59:46.489742 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6","Type":"ContainerStarted","Data":"c5f994c8d32f31faa5658bd2ea863bdeb2432462167e2deaaf935d24ea7773c7"} Jan 24 07:59:46 crc kubenswrapper[4705]: I0124 07:59:46.491314 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c1bb965b-b26a-4471-86ef-467dde0aea03","Type":"ContainerStarted","Data":"af57192cc368713ed8e6ed5072d53546cb598b0c1cd654fe48a4c4356d6548de"} Jan 24 07:59:46 crc kubenswrapper[4705]: I0124 07:59:46.491510 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 24 07:59:46 crc kubenswrapper[4705]: I0124 07:59:46.529903 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-dqhqz" podStartSLOduration=35.993385688000004 podStartE2EDuration="40.529869369s" podCreationTimestamp="2026-01-24 07:59:06 +0000 UTC" firstStartedPulling="2026-01-24 07:59:41.5351279 +0000 UTC m=+1120.255001188" lastFinishedPulling="2026-01-24 07:59:46.071611581 +0000 UTC m=+1124.791484869" observedRunningTime="2026-01-24 07:59:46.505549533 +0000 UTC m=+1125.225422821" watchObservedRunningTime="2026-01-24 07:59:46.529869369 +0000 UTC m=+1125.249742657" Jan 24 07:59:46 crc kubenswrapper[4705]: I0124 07:59:46.539583 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.399790045 
podStartE2EDuration="43.539558151s" podCreationTimestamp="2026-01-24 07:59:03 +0000 UTC" firstStartedPulling="2026-01-24 07:59:04.957098107 +0000 UTC m=+1083.676971395" lastFinishedPulling="2026-01-24 07:59:46.096866213 +0000 UTC m=+1124.816739501" observedRunningTime="2026-01-24 07:59:46.521601856 +0000 UTC m=+1125.241475144" watchObservedRunningTime="2026-01-24 07:59:46.539558151 +0000 UTC m=+1125.259431439" Jan 24 07:59:47 crc kubenswrapper[4705]: I0124 07:59:47.506633 4705 generic.go:334] "Generic (PLEG): container finished" podID="1972dfce-f49c-481e-a252-f1c8ad52ecc5" containerID="52223ce6abf39e9a3b8a0c9c2e8e8a0d6df0e9cfa2af2d2aec20c0ab2d5ba1c2" exitCode=0 Jan 24 07:59:47 crc kubenswrapper[4705]: I0124 07:59:47.506867 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-llw8s" event={"ID":"1972dfce-f49c-481e-a252-f1c8ad52ecc5","Type":"ContainerDied","Data":"52223ce6abf39e9a3b8a0c9c2e8e8a0d6df0e9cfa2af2d2aec20c0ab2d5ba1c2"} Jan 24 07:59:47 crc kubenswrapper[4705]: I0124 07:59:47.513676 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"14a437d6-0b75-49b5-a509-e9dd8beefa45","Type":"ContainerStarted","Data":"3b1c3bad40fd8d7a85b987d091c4cf75d636a9ba03e19f88b0e3a1f4db4f1716"} Jan 24 07:59:48 crc kubenswrapper[4705]: I0124 07:59:48.523675 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57","Type":"ContainerStarted","Data":"241936c95010f43a17b312fb3af3043d1d415f93016b4bb8c8bd6c4b877be142"} Jan 24 07:59:48 crc kubenswrapper[4705]: I0124 07:59:48.528259 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-llw8s" event={"ID":"1972dfce-f49c-481e-a252-f1c8ad52ecc5","Type":"ContainerStarted","Data":"33d269e28ca72d00136df1490dd21edc8178c1092ead9532c262d5d789cad200"} Jan 24 07:59:48 crc kubenswrapper[4705]: I0124 07:59:48.530669 4705 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ec9c2213-448d-4532-b6a6-3f6242f5ab5f","Type":"ContainerStarted","Data":"1c0f4d17fbb37ccefd99c5c34c6159701f4449e2e4ca50285620e1c4b81bba4e"} Jan 24 07:59:48 crc kubenswrapper[4705]: I0124 07:59:48.531131 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 24 07:59:49 crc kubenswrapper[4705]: I0124 07:59:49.545375 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6","Type":"ContainerStarted","Data":"edd81ee8ad6c04fcec99ae876d23f978e13dbd019c20901f8e603f9670eba9bf"} Jan 24 07:59:49 crc kubenswrapper[4705]: I0124 07:59:49.547889 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-llw8s" event={"ID":"1972dfce-f49c-481e-a252-f1c8ad52ecc5","Type":"ContainerStarted","Data":"db1c56433da5c21af99f1d98610849cf0766f104a8f05a9128eb51900c1cb78f"} Jan 24 07:59:49 crc kubenswrapper[4705]: I0124 07:59:49.548120 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:49 crc kubenswrapper[4705]: I0124 07:59:49.548143 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-llw8s" Jan 24 07:59:49 crc kubenswrapper[4705]: I0124 07:59:49.552361 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ac239835-9243-4353-8ca5-ff79405c5009","Type":"ContainerStarted","Data":"68bee3449fa78f05b80ea1bc76605b06f0109055cf023dad79669bc388cdec0b"} Jan 24 07:59:49 crc kubenswrapper[4705]: I0124 07:59:49.568785 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.622704086 podStartE2EDuration="48.568766607s" podCreationTimestamp="2026-01-24 07:59:01 +0000 UTC" firstStartedPulling="2026-01-24 07:59:03.135091666 +0000 UTC m=+1081.854964954" 
lastFinishedPulling="2026-01-24 07:59:47.081154187 +0000 UTC m=+1125.801027475" observedRunningTime="2026-01-24 07:59:48.567946156 +0000 UTC m=+1127.287819454" watchObservedRunningTime="2026-01-24 07:59:49.568766607 +0000 UTC m=+1128.288639895" Jan 24 07:59:49 crc kubenswrapper[4705]: I0124 07:59:49.570520 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=34.664313476 podStartE2EDuration="41.570515047s" podCreationTimestamp="2026-01-24 07:59:08 +0000 UTC" firstStartedPulling="2026-01-24 07:59:42.410449265 +0000 UTC m=+1121.130322553" lastFinishedPulling="2026-01-24 07:59:49.316650836 +0000 UTC m=+1128.036524124" observedRunningTime="2026-01-24 07:59:49.567122631 +0000 UTC m=+1128.286995929" watchObservedRunningTime="2026-01-24 07:59:49.570515047 +0000 UTC m=+1128.290388335" Jan 24 07:59:49 crc kubenswrapper[4705]: I0124 07:59:49.590299 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-llw8s" podStartSLOduration=40.018070622 podStartE2EDuration="43.590284283s" podCreationTimestamp="2026-01-24 07:59:06 +0000 UTC" firstStartedPulling="2026-01-24 07:59:42.534772987 +0000 UTC m=+1121.254646275" lastFinishedPulling="2026-01-24 07:59:46.106986648 +0000 UTC m=+1124.826859936" observedRunningTime="2026-01-24 07:59:49.58766835 +0000 UTC m=+1128.307541638" watchObservedRunningTime="2026-01-24 07:59:49.590284283 +0000 UTC m=+1128.310157571" Jan 24 07:59:49 crc kubenswrapper[4705]: I0124 07:59:49.617054 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=36.732420894 podStartE2EDuration="43.617037637s" podCreationTimestamp="2026-01-24 07:59:06 +0000 UTC" firstStartedPulling="2026-01-24 07:59:42.419614113 +0000 UTC m=+1121.139487401" lastFinishedPulling="2026-01-24 07:59:49.304230866 +0000 UTC m=+1128.024104144" observedRunningTime="2026-01-24 07:59:49.61429226 +0000 UTC 
m=+1128.334165548" watchObservedRunningTime="2026-01-24 07:59:49.617037637 +0000 UTC m=+1128.336910925" Jan 24 07:59:49 crc kubenswrapper[4705]: I0124 07:59:49.850763 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:49 crc kubenswrapper[4705]: I0124 07:59:49.904552 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:50 crc kubenswrapper[4705]: I0124 07:59:50.184395 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:50 crc kubenswrapper[4705]: I0124 07:59:50.560800 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b","Type":"ContainerStarted","Data":"4f7a28d99e34f816b46e9953e10f0c82241d2be0fb24e7a4de0ee5022d382948"} Jan 24 07:59:50 crc kubenswrapper[4705]: I0124 07:59:50.562111 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:51 crc kubenswrapper[4705]: I0124 07:59:51.587075 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"95a51efd-0ac3-4c02-8052-5b4017444820","Type":"ContainerStarted","Data":"932612e1a1335f21f7b6a22ce07c8fbc0f9739f8f32dd2240c6fb336ccaf001e"} Jan 24 07:59:51 crc kubenswrapper[4705]: I0124 07:59:51.623849 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 24 07:59:51 crc kubenswrapper[4705]: I0124 07:59:51.900917 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vwcq9"] Jan 24 07:59:51 crc kubenswrapper[4705]: I0124 07:59:51.962577 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-fspw2"] Jan 24 07:59:51 crc kubenswrapper[4705]: I0124 07:59:51.964296 4705 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" Jan 24 07:59:51 crc kubenswrapper[4705]: I0124 07:59:51.998530 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.012810 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-fspw2"] Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.121544 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-config\") pod \"dnsmasq-dns-5bf47b49b7-fspw2\" (UID: \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.122035 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk7jf\" (UniqueName: \"kubernetes.io/projected/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-kube-api-access-jk7jf\") pod \"dnsmasq-dns-5bf47b49b7-fspw2\" (UID: \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.122123 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-fspw2\" (UID: \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.122210 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-fspw2\" (UID: \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\") " 
pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.185558 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-hp2mc"] Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.186930 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.192760 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hp2mc"] Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.193111 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.355313 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.355811 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-fspw2\" (UID: \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.355870 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a80046d0-b499-49e8-98aa-78869a5f0482-ovn-rundir\") pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.355930 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80046d0-b499-49e8-98aa-78869a5f0482-combined-ca-bundle\") pod \"ovn-controller-metrics-hp2mc\" 
(UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.356121 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-config\") pod \"dnsmasq-dns-5bf47b49b7-fspw2\" (UID: \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.356195 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a80046d0-b499-49e8-98aa-78869a5f0482-ovs-rundir\") pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.356223 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk7jf\" (UniqueName: \"kubernetes.io/projected/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-kube-api-access-jk7jf\") pod \"dnsmasq-dns-5bf47b49b7-fspw2\" (UID: \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.356262 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a80046d0-b499-49e8-98aa-78869a5f0482-config\") pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.356286 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-fspw2\" (UID: 
\"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.356308 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hnps\" (UniqueName: \"kubernetes.io/projected/a80046d0-b499-49e8-98aa-78869a5f0482-kube-api-access-4hnps\") pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.356339 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80046d0-b499-49e8-98aa-78869a5f0482-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.357041 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-fspw2\" (UID: \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.357297 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-config\") pod \"dnsmasq-dns-5bf47b49b7-fspw2\" (UID: \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.357778 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-fspw2\" (UID: \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\") 
" pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.373779 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-szr7x"] Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.396632 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk7jf\" (UniqueName: \"kubernetes.io/projected/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-kube-api-access-jk7jf\") pod \"dnsmasq-dns-5bf47b49b7-fspw2\" (UID: \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.414515 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-g6gpt"] Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.415903 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.418668 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.419373 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.468318 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-g6gpt"] Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.469189 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a80046d0-b499-49e8-98aa-78869a5f0482-ovn-rundir\") pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.469249 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80046d0-b499-49e8-98aa-78869a5f0482-combined-ca-bundle\") pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.469321 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a80046d0-b499-49e8-98aa-78869a5f0482-ovs-rundir\") pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.469351 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a80046d0-b499-49e8-98aa-78869a5f0482-config\") pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.469370 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hnps\" (UniqueName: 
\"kubernetes.io/projected/a80046d0-b499-49e8-98aa-78869a5f0482-kube-api-access-4hnps\") pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.469399 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80046d0-b499-49e8-98aa-78869a5f0482-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.470145 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a80046d0-b499-49e8-98aa-78869a5f0482-ovs-rundir\") pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.470204 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a80046d0-b499-49e8-98aa-78869a5f0482-ovn-rundir\") pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.470777 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a80046d0-b499-49e8-98aa-78869a5f0482-config\") pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.479804 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80046d0-b499-49e8-98aa-78869a5f0482-metrics-certs-tls-certs\") 
pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.479840 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80046d0-b499-49e8-98aa-78869a5f0482-combined-ca-bundle\") pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.490360 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.500588 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hnps\" (UniqueName: \"kubernetes.io/projected/a80046d0-b499-49e8-98aa-78869a5f0482-kube-api-access-4hnps\") pod \"ovn-controller-metrics-hp2mc\" (UID: \"a80046d0-b499-49e8-98aa-78869a5f0482\") " pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.571552 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-config\") pod \"dnsmasq-dns-8554648995-g6gpt\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.571588 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-g6gpt\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.571647 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7fdd\" (UniqueName: \"kubernetes.io/projected/469c2416-21a1-4157-9896-3d54ff2c9d02-kube-api-access-x7fdd\") pod \"dnsmasq-dns-8554648995-g6gpt\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.571682 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-g6gpt\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.571708 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-dns-svc\") pod \"dnsmasq-dns-8554648995-g6gpt\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.646798 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.753957 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-hp2mc" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.754743 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7fdd\" (UniqueName: \"kubernetes.io/projected/469c2416-21a1-4157-9896-3d54ff2c9d02-kube-api-access-x7fdd\") pod \"dnsmasq-dns-8554648995-g6gpt\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.754797 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-g6gpt\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.754852 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-dns-svc\") pod \"dnsmasq-dns-8554648995-g6gpt\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.754983 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-config\") pod \"dnsmasq-dns-8554648995-g6gpt\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.755017 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-g6gpt\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " pod="openstack/dnsmasq-dns-8554648995-g6gpt" 
Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.755982 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-g6gpt\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.755997 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-dns-svc\") pod \"dnsmasq-dns-8554648995-g6gpt\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.756736 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-g6gpt\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.756744 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-config\") pod \"dnsmasq-dns-8554648995-g6gpt\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.870678 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7fdd\" (UniqueName: \"kubernetes.io/projected/469c2416-21a1-4157-9896-3d54ff2c9d02-kube-api-access-x7fdd\") pod \"dnsmasq-dns-8554648995-g6gpt\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.885934 4705 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/ovn-northd-0"] Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.887355 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.892282 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.892433 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.892537 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.892784 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jgvh5" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.894002 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.960127 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6dhd\" (UniqueName: \"kubernetes.io/projected/89e4ed86-cfcf-457e-bca5-29d0001a7785-kube-api-access-g6dhd\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.960189 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89e4ed86-cfcf-457e-bca5-29d0001a7785-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.960228 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/89e4ed86-cfcf-457e-bca5-29d0001a7785-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.960254 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89e4ed86-cfcf-457e-bca5-29d0001a7785-scripts\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.960268 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e4ed86-cfcf-457e-bca5-29d0001a7785-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.960283 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e4ed86-cfcf-457e-bca5-29d0001a7785-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.960303 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e4ed86-cfcf-457e-bca5-29d0001a7785-config\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:52 crc kubenswrapper[4705]: I0124 07:59:52.962631 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.026664 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.061216 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p528\" (UniqueName: \"kubernetes.io/projected/282f3533-7f86-4400-8533-480ce5bb9c55-kube-api-access-9p528\") pod \"282f3533-7f86-4400-8533-480ce5bb9c55\" (UID: \"282f3533-7f86-4400-8533-480ce5bb9c55\") " Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.061300 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/282f3533-7f86-4400-8533-480ce5bb9c55-dns-svc\") pod \"282f3533-7f86-4400-8533-480ce5bb9c55\" (UID: \"282f3533-7f86-4400-8533-480ce5bb9c55\") " Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.061441 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282f3533-7f86-4400-8533-480ce5bb9c55-config\") pod \"282f3533-7f86-4400-8533-480ce5bb9c55\" (UID: \"282f3533-7f86-4400-8533-480ce5bb9c55\") " Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.061691 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6dhd\" (UniqueName: \"kubernetes.io/projected/89e4ed86-cfcf-457e-bca5-29d0001a7785-kube-api-access-g6dhd\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.061753 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89e4ed86-cfcf-457e-bca5-29d0001a7785-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.061800 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e4ed86-cfcf-457e-bca5-29d0001a7785-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.061846 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89e4ed86-cfcf-457e-bca5-29d0001a7785-scripts\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.061873 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e4ed86-cfcf-457e-bca5-29d0001a7785-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.061896 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e4ed86-cfcf-457e-bca5-29d0001a7785-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.061920 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e4ed86-cfcf-457e-bca5-29d0001a7785-config\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.062425 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/282f3533-7f86-4400-8533-480ce5bb9c55-config" (OuterVolumeSpecName: "config") pod "282f3533-7f86-4400-8533-480ce5bb9c55" (UID: "282f3533-7f86-4400-8533-480ce5bb9c55"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.062760 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/282f3533-7f86-4400-8533-480ce5bb9c55-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "282f3533-7f86-4400-8533-480ce5bb9c55" (UID: "282f3533-7f86-4400-8533-480ce5bb9c55"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.062869 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e4ed86-cfcf-457e-bca5-29d0001a7785-config\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.063566 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89e4ed86-cfcf-457e-bca5-29d0001a7785-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.063641 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89e4ed86-cfcf-457e-bca5-29d0001a7785-scripts\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.066063 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/282f3533-7f86-4400-8533-480ce5bb9c55-kube-api-access-9p528" (OuterVolumeSpecName: "kube-api-access-9p528") pod "282f3533-7f86-4400-8533-480ce5bb9c55" (UID: "282f3533-7f86-4400-8533-480ce5bb9c55"). InnerVolumeSpecName "kube-api-access-9p528". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.068988 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e4ed86-cfcf-457e-bca5-29d0001a7785-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.071166 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e4ed86-cfcf-457e-bca5-29d0001a7785-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.073579 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e4ed86-cfcf-457e-bca5-29d0001a7785-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.083346 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6dhd\" (UniqueName: \"kubernetes.io/projected/89e4ed86-cfcf-457e-bca5-29d0001a7785-kube-api-access-g6dhd\") pod \"ovn-northd-0\" (UID: \"89e4ed86-cfcf-457e-bca5-29d0001a7785\") " pod="openstack/ovn-northd-0" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.147922 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.163022 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62h8z\" (UniqueName: \"kubernetes.io/projected/88e64eab-eb47-47b4-aae0-86bb27fab696-kube-api-access-62h8z\") pod \"88e64eab-eb47-47b4-aae0-86bb27fab696\" (UID: \"88e64eab-eb47-47b4-aae0-86bb27fab696\") " Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.163356 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88e64eab-eb47-47b4-aae0-86bb27fab696-dns-svc\") pod \"88e64eab-eb47-47b4-aae0-86bb27fab696\" (UID: \"88e64eab-eb47-47b4-aae0-86bb27fab696\") " Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.163386 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e64eab-eb47-47b4-aae0-86bb27fab696-config\") pod \"88e64eab-eb47-47b4-aae0-86bb27fab696\" (UID: \"88e64eab-eb47-47b4-aae0-86bb27fab696\") " Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.163786 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282f3533-7f86-4400-8533-480ce5bb9c55-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.163799 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p528\" (UniqueName: \"kubernetes.io/projected/282f3533-7f86-4400-8533-480ce5bb9c55-kube-api-access-9p528\") on node \"crc\" DevicePath \"\"" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.163968 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88e64eab-eb47-47b4-aae0-86bb27fab696-config" (OuterVolumeSpecName: "config") pod "88e64eab-eb47-47b4-aae0-86bb27fab696" (UID: "88e64eab-eb47-47b4-aae0-86bb27fab696"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.163809 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/282f3533-7f86-4400-8533-480ce5bb9c55-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.164050 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88e64eab-eb47-47b4-aae0-86bb27fab696-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "88e64eab-eb47-47b4-aae0-86bb27fab696" (UID: "88e64eab-eb47-47b4-aae0-86bb27fab696"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.168308 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88e64eab-eb47-47b4-aae0-86bb27fab696-kube-api-access-62h8z" (OuterVolumeSpecName: "kube-api-access-62h8z") pod "88e64eab-eb47-47b4-aae0-86bb27fab696" (UID: "88e64eab-eb47-47b4-aae0-86bb27fab696"). InnerVolumeSpecName "kube-api-access-62h8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.228425 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.266159 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62h8z\" (UniqueName: \"kubernetes.io/projected/88e64eab-eb47-47b4-aae0-86bb27fab696-kube-api-access-62h8z\") on node \"crc\" DevicePath \"\"" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.266190 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88e64eab-eb47-47b4-aae0-86bb27fab696-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.266200 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e64eab-eb47-47b4-aae0-86bb27fab696-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.352067 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-fspw2"] Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.387968 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hp2mc"] Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.681702 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" event={"ID":"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3","Type":"ContainerStarted","Data":"dd74b573bb1decbd772e22b347e769895bc6443a233166a6ab2dc15ef29b8f72"} Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.700727 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-g6gpt"] Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.702175 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.705410 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-szr7x" event={"ID":"282f3533-7f86-4400-8533-480ce5bb9c55","Type":"ContainerDied","Data":"bd35e75039d4f680e25b96ce7a172780b9746a39468d296c4a0bcc2badf582e2"} Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.731987 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hp2mc" event={"ID":"a80046d0-b499-49e8-98aa-78869a5f0482","Type":"ContainerStarted","Data":"a9503f60fcbe2b68eb924bac0901cb14b897031a6fe3bffb85f4258931287cf8"} Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.741378 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.743313 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vwcq9" event={"ID":"88e64eab-eb47-47b4-aae0-86bb27fab696","Type":"ContainerDied","Data":"859f6090a573a138dff4630168a9b9def7d75ae179d891c142abd00f1710ec8e"} Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.758184 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 24 07:59:53 crc kubenswrapper[4705]: W0124 07:59:53.766641 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89e4ed86_cfcf_457e_bca5_29d0001a7785.slice/crio-38a28fd737a3dc5e46dd96509fde81bf967aae35b347249d31b5a1e1c00b269c WatchSource:0}: Error finding container 38a28fd737a3dc5e46dd96509fde81bf967aae35b347249d31b5a1e1c00b269c: Status 404 returned error can't find the container with id 38a28fd737a3dc5e46dd96509fde81bf967aae35b347249d31b5a1e1c00b269c Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.821175 4705 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vwcq9"] Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.828749 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vwcq9"] Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.859858 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-szr7x"] Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.867778 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-szr7x"] Jan 24 07:59:53 crc kubenswrapper[4705]: I0124 07:59:53.985274 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 24 07:59:54 crc kubenswrapper[4705]: I0124 07:59:54.756672 4705 generic.go:334] "Generic (PLEG): container finished" podID="fe0e2464-6e0e-4a44-9a30-6e17b8990fa3" containerID="8b484a96b03fe91ab06f8b9790d053ab565d54d01778d2ba363233e517a6a125" exitCode=0 Jan 24 07:59:54 crc kubenswrapper[4705]: I0124 07:59:54.756837 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" event={"ID":"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3","Type":"ContainerDied","Data":"8b484a96b03fe91ab06f8b9790d053ab565d54d01778d2ba363233e517a6a125"} Jan 24 07:59:54 crc kubenswrapper[4705]: I0124 07:59:54.759135 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hp2mc" event={"ID":"a80046d0-b499-49e8-98aa-78869a5f0482","Type":"ContainerStarted","Data":"ea9b975fc07e969c45983db7dd0ba406a2210fa3fe2187780707d6927e93973f"} Jan 24 07:59:54 crc kubenswrapper[4705]: I0124 07:59:54.762627 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"89e4ed86-cfcf-457e-bca5-29d0001a7785","Type":"ContainerStarted","Data":"38a28fd737a3dc5e46dd96509fde81bf967aae35b347249d31b5a1e1c00b269c"} Jan 24 07:59:54 crc kubenswrapper[4705]: I0124 07:59:54.765000 4705 generic.go:334] 
"Generic (PLEG): container finished" podID="469c2416-21a1-4157-9896-3d54ff2c9d02" containerID="8ae02947e3c378c0de06d554727b6fc1de86dd02e6fc86283002411a84d48fd5" exitCode=0 Jan 24 07:59:54 crc kubenswrapper[4705]: I0124 07:59:54.765167 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-g6gpt" event={"ID":"469c2416-21a1-4157-9896-3d54ff2c9d02","Type":"ContainerDied","Data":"8ae02947e3c378c0de06d554727b6fc1de86dd02e6fc86283002411a84d48fd5"} Jan 24 07:59:54 crc kubenswrapper[4705]: I0124 07:59:54.765260 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-g6gpt" event={"ID":"469c2416-21a1-4157-9896-3d54ff2c9d02","Type":"ContainerStarted","Data":"4eaf383e99df3aa322828810250c6a88413292ee7942444ee085b2905ab233ec"} Jan 24 07:59:54 crc kubenswrapper[4705]: I0124 07:59:54.826163 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-hp2mc" podStartSLOduration=2.826147604 podStartE2EDuration="2.826147604s" podCreationTimestamp="2026-01-24 07:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:59:54.825260569 +0000 UTC m=+1133.545133857" watchObservedRunningTime="2026-01-24 07:59:54.826147604 +0000 UTC m=+1133.546020892" Jan 24 07:59:55 crc kubenswrapper[4705]: I0124 07:59:55.585632 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="282f3533-7f86-4400-8533-480ce5bb9c55" path="/var/lib/kubelet/pods/282f3533-7f86-4400-8533-480ce5bb9c55/volumes" Jan 24 07:59:55 crc kubenswrapper[4705]: I0124 07:59:55.586111 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88e64eab-eb47-47b4-aae0-86bb27fab696" path="/var/lib/kubelet/pods/88e64eab-eb47-47b4-aae0-86bb27fab696/volumes" Jan 24 07:59:55 crc kubenswrapper[4705]: I0124 07:59:55.774297 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-northd-0" event={"ID":"89e4ed86-cfcf-457e-bca5-29d0001a7785","Type":"ContainerStarted","Data":"644997029e93da1289a5d15d7d8792d1c5eea0a6df2c070ad76f7891fae1b8a5"} Jan 24 07:59:55 crc kubenswrapper[4705]: I0124 07:59:55.776736 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-g6gpt" event={"ID":"469c2416-21a1-4157-9896-3d54ff2c9d02","Type":"ContainerStarted","Data":"856ac80664fe1f3e00b7c34ac7ae6b1502ec6b76e6b64d782a4769ac3e9beb02"} Jan 24 07:59:55 crc kubenswrapper[4705]: I0124 07:59:55.777093 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 07:59:55 crc kubenswrapper[4705]: I0124 07:59:55.780126 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" event={"ID":"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3","Type":"ContainerStarted","Data":"37f8abb9d8d2758be3752fa0d709dc7b2ed82eea6d2eee47f1d5656e8e8999a2"} Jan 24 07:59:55 crc kubenswrapper[4705]: I0124 07:59:55.780167 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" Jan 24 07:59:55 crc kubenswrapper[4705]: I0124 07:59:55.781059 4705 generic.go:334] "Generic (PLEG): container finished" podID="9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b" containerID="4f7a28d99e34f816b46e9953e10f0c82241d2be0fb24e7a4de0ee5022d382948" exitCode=0 Jan 24 07:59:55 crc kubenswrapper[4705]: I0124 07:59:55.781215 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b","Type":"ContainerDied","Data":"4f7a28d99e34f816b46e9953e10f0c82241d2be0fb24e7a4de0ee5022d382948"} Jan 24 07:59:55 crc kubenswrapper[4705]: I0124 07:59:55.806578 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-g6gpt" podStartSLOduration=3.392052044 podStartE2EDuration="3.8065625s" 
podCreationTimestamp="2026-01-24 07:59:52 +0000 UTC" firstStartedPulling="2026-01-24 07:59:53.750140395 +0000 UTC m=+1132.470013693" lastFinishedPulling="2026-01-24 07:59:54.164650861 +0000 UTC m=+1132.884524149" observedRunningTime="2026-01-24 07:59:55.796059874 +0000 UTC m=+1134.515933162" watchObservedRunningTime="2026-01-24 07:59:55.8065625 +0000 UTC m=+1134.526435788" Jan 24 07:59:55 crc kubenswrapper[4705]: I0124 07:59:55.822179 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" podStartSLOduration=4.278039534 podStartE2EDuration="4.82216085s" podCreationTimestamp="2026-01-24 07:59:51 +0000 UTC" firstStartedPulling="2026-01-24 07:59:53.37005559 +0000 UTC m=+1132.089928878" lastFinishedPulling="2026-01-24 07:59:53.914176906 +0000 UTC m=+1132.634050194" observedRunningTime="2026-01-24 07:59:55.81685256 +0000 UTC m=+1134.536725848" watchObservedRunningTime="2026-01-24 07:59:55.82216085 +0000 UTC m=+1134.542034138" Jan 24 07:59:56 crc kubenswrapper[4705]: I0124 07:59:56.767393 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 24 07:59:56 crc kubenswrapper[4705]: I0124 07:59:56.807565 4705 generic.go:334] "Generic (PLEG): container finished" podID="95a51efd-0ac3-4c02-8052-5b4017444820" containerID="932612e1a1335f21f7b6a22ce07c8fbc0f9739f8f32dd2240c6fb336ccaf001e" exitCode=0 Jan 24 07:59:56 crc kubenswrapper[4705]: I0124 07:59:56.807671 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"95a51efd-0ac3-4c02-8052-5b4017444820","Type":"ContainerDied","Data":"932612e1a1335f21f7b6a22ce07c8fbc0f9739f8f32dd2240c6fb336ccaf001e"} Jan 24 07:59:56 crc kubenswrapper[4705]: I0124 07:59:56.812236 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b","Type":"ContainerStarted","Data":"20f4b47febce2061bce5de45850ab3ee1e57cbeed8465dfd475b599818792565"} Jan 24 07:59:56 crc kubenswrapper[4705]: I0124 07:59:56.823892 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"89e4ed86-cfcf-457e-bca5-29d0001a7785","Type":"ContainerStarted","Data":"4b423036005e303b4a6e61ccac43589de35baa52681e58c877b4e267e6a93970"} Jan 24 07:59:56 crc kubenswrapper[4705]: I0124 07:59:56.824163 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 24 07:59:56 crc kubenswrapper[4705]: I0124 07:59:56.865398 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.280990575 podStartE2EDuration="4.865372174s" podCreationTimestamp="2026-01-24 07:59:52 +0000 UTC" firstStartedPulling="2026-01-24 07:59:53.77196769 +0000 UTC m=+1132.491840978" lastFinishedPulling="2026-01-24 07:59:55.356349289 +0000 UTC m=+1134.076222577" observedRunningTime="2026-01-24 07:59:56.860479327 +0000 UTC m=+1135.580352625" watchObservedRunningTime="2026-01-24 07:59:56.865372174 +0000 UTC m=+1135.585245462" Jan 24 07:59:56 crc kubenswrapper[4705]: I0124 07:59:56.915479 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.618942371 podStartE2EDuration="57.915444915s" podCreationTimestamp="2026-01-24 07:58:59 +0000 UTC" firstStartedPulling="2026-01-24 07:59:02.755461113 +0000 UTC m=+1081.475334401" lastFinishedPulling="2026-01-24 07:59:50.051963657 +0000 UTC m=+1128.771836945" observedRunningTime="2026-01-24 07:59:56.910182147 +0000 UTC m=+1135.630055435" watchObservedRunningTime="2026-01-24 07:59:56.915444915 +0000 UTC m=+1135.635318193" Jan 24 07:59:57 crc kubenswrapper[4705]: I0124 07:59:57.830299 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"95a51efd-0ac3-4c02-8052-5b4017444820","Type":"ContainerStarted","Data":"f0ab54a45ff332537aca836b1c79e476e795855dba3811d00fd91e6a40569828"} Jan 24 07:59:57 crc kubenswrapper[4705]: I0124 07:59:57.858300 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371976.996496 podStartE2EDuration="59.858280481s" podCreationTimestamp="2026-01-24 07:58:58 +0000 UTC" firstStartedPulling="2026-01-24 07:59:01.556779568 +0000 UTC m=+1080.276652856" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:59:57.851956133 +0000 UTC m=+1136.571829431" watchObservedRunningTime="2026-01-24 07:59:57.858280481 +0000 UTC m=+1136.578153769" Jan 24 07:59:59 crc kubenswrapper[4705]: I0124 07:59:59.637324 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 24 07:59:59 crc kubenswrapper[4705]: I0124 07:59:59.637391 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.144745 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r"] Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.146565 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r" Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.148856 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.149046 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.152984 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r"] Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.315287 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a0b314-0224-465a-9302-2ec3f4cdaf02-secret-volume\") pod \"collect-profiles-29487360-vbc6r\" (UID: \"27a0b314-0224-465a-9302-2ec3f4cdaf02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r" Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.315564 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a0b314-0224-465a-9302-2ec3f4cdaf02-config-volume\") pod \"collect-profiles-29487360-vbc6r\" (UID: \"27a0b314-0224-465a-9302-2ec3f4cdaf02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r" Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.315681 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fclgq\" (UniqueName: \"kubernetes.io/projected/27a0b314-0224-465a-9302-2ec3f4cdaf02-kube-api-access-fclgq\") pod \"collect-profiles-29487360-vbc6r\" (UID: \"27a0b314-0224-465a-9302-2ec3f4cdaf02\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r" Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.555550 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a0b314-0224-465a-9302-2ec3f4cdaf02-secret-volume\") pod \"collect-profiles-29487360-vbc6r\" (UID: \"27a0b314-0224-465a-9302-2ec3f4cdaf02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r" Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.555607 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a0b314-0224-465a-9302-2ec3f4cdaf02-config-volume\") pod \"collect-profiles-29487360-vbc6r\" (UID: \"27a0b314-0224-465a-9302-2ec3f4cdaf02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r" Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.555635 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fclgq\" (UniqueName: \"kubernetes.io/projected/27a0b314-0224-465a-9302-2ec3f4cdaf02-kube-api-access-fclgq\") pod \"collect-profiles-29487360-vbc6r\" (UID: \"27a0b314-0224-465a-9302-2ec3f4cdaf02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r" Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.560422 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a0b314-0224-465a-9302-2ec3f4cdaf02-config-volume\") pod \"collect-profiles-29487360-vbc6r\" (UID: \"27a0b314-0224-465a-9302-2ec3f4cdaf02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r" Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.562743 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/27a0b314-0224-465a-9302-2ec3f4cdaf02-secret-volume\") pod \"collect-profiles-29487360-vbc6r\" (UID: \"27a0b314-0224-465a-9302-2ec3f4cdaf02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r" Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.575438 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fclgq\" (UniqueName: \"kubernetes.io/projected/27a0b314-0224-465a-9302-2ec3f4cdaf02-kube-api-access-fclgq\") pod \"collect-profiles-29487360-vbc6r\" (UID: \"27a0b314-0224-465a-9302-2ec3f4cdaf02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r" Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.778120 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r" Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.930126 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 24 08:00:00 crc kubenswrapper[4705]: I0124 08:00:00.930170 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 24 08:00:01 crc kubenswrapper[4705]: I0124 08:00:01.016735 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 24 08:00:01 crc kubenswrapper[4705]: I0124 08:00:01.798597 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r"] Jan 24 08:00:01 crc kubenswrapper[4705]: I0124 08:00:01.871910 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r" event={"ID":"27a0b314-0224-465a-9302-2ec3f4cdaf02","Type":"ContainerStarted","Data":"d8a5732d585ac9b2b7cd178181f9c463133c56c941e6962c6d6466a1fcfc2862"} Jan 24 08:00:01 crc kubenswrapper[4705]: 
I0124 08:00:01.961536 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 24 08:00:02 crc kubenswrapper[4705]: I0124 08:00:02.536953 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.238453 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.294241 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-fspw2"] Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.295792 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" podUID="fe0e2464-6e0e-4a44-9a30-6e17b8990fa3" containerName="dnsmasq-dns" containerID="cri-o://37f8abb9d8d2758be3752fa0d709dc7b2ed82eea6d2eee47f1d5656e8e8999a2" gracePeriod=10 Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.589525 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-vckb7"] Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.591620 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.604723 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-vckb7"] Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.748273 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-vckb7\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.748340 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddwcz\" (UniqueName: \"kubernetes.io/projected/e1fa4568-6ba7-4897-9076-b1778b317348-kube-api-access-ddwcz\") pod \"dnsmasq-dns-b8fbc5445-vckb7\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.748451 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-vckb7\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.748540 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-vckb7\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.748633 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-config\") pod \"dnsmasq-dns-b8fbc5445-vckb7\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.884351 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-vckb7\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.884437 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-vckb7\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.884493 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-config\") pod \"dnsmasq-dns-b8fbc5445-vckb7\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.884564 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-vckb7\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.884591 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddwcz\" (UniqueName: 
\"kubernetes.io/projected/e1fa4568-6ba7-4897-9076-b1778b317348-kube-api-access-ddwcz\") pod \"dnsmasq-dns-b8fbc5445-vckb7\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.894130 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-vckb7\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.898354 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-vckb7\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.898605 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-vckb7\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.899175 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-config\") pod \"dnsmasq-dns-b8fbc5445-vckb7\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.906616 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddwcz\" (UniqueName: \"kubernetes.io/projected/e1fa4568-6ba7-4897-9076-b1778b317348-kube-api-access-ddwcz\") pod 
\"dnsmasq-dns-b8fbc5445-vckb7\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:03 crc kubenswrapper[4705]: I0124 08:00:03.924029 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:04 crc kubenswrapper[4705]: I0124 08:00:04.449791 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-vckb7"] Jan 24 08:00:04 crc kubenswrapper[4705]: W0124 08:00:04.455180 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1fa4568_6ba7_4897_9076_b1778b317348.slice/crio-8834781682d635ae73b2b725eb02d733a05f372e571ddcb838d4af1d91fe2a2b WatchSource:0}: Error finding container 8834781682d635ae73b2b725eb02d733a05f372e571ddcb838d4af1d91fe2a2b: Status 404 returned error can't find the container with id 8834781682d635ae73b2b725eb02d733a05f372e571ddcb838d4af1d91fe2a2b Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.344642 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" event={"ID":"e1fa4568-6ba7-4897-9076-b1778b317348","Type":"ContainerStarted","Data":"8834781682d635ae73b2b725eb02d733a05f372e571ddcb838d4af1d91fe2a2b"} Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.364541 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.366432 4705 generic.go:334] "Generic (PLEG): container finished" podID="fe0e2464-6e0e-4a44-9a30-6e17b8990fa3" containerID="37f8abb9d8d2758be3752fa0d709dc7b2ed82eea6d2eee47f1d5656e8e8999a2" exitCode=0 Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.379727 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" 
event={"ID":"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3","Type":"ContainerDied","Data":"37f8abb9d8d2758be3752fa0d709dc7b2ed82eea6d2eee47f1d5656e8e8999a2"} Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.379798 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.379932 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.383059 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.383925 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.384952 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.385070 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-qdpl4" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.498917 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.499868 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2521bbad-8785-4fbf-94fe-7309e9fe3442-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.500001 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2521bbad-8785-4fbf-94fe-7309e9fe3442-lock\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.500136 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmzhz\" (UniqueName: \"kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-kube-api-access-xmzhz\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.500168 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2521bbad-8785-4fbf-94fe-7309e9fe3442-cache\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.500212 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.602161 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.602256 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.602305 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2521bbad-8785-4fbf-94fe-7309e9fe3442-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.602323 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2521bbad-8785-4fbf-94fe-7309e9fe3442-lock\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.602376 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmzhz\" (UniqueName: \"kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-kube-api-access-xmzhz\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: E0124 08:00:05.602379 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 08:00:05 crc kubenswrapper[4705]: E0124 08:00:05.602406 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 08:00:05 crc kubenswrapper[4705]: E0124 08:00:05.602457 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift podName:2521bbad-8785-4fbf-94fe-7309e9fe3442 nodeName:}" failed. No retries permitted until 2026-01-24 08:00:06.102439408 +0000 UTC m=+1144.822312696 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift") pod "swift-storage-0" (UID: "2521bbad-8785-4fbf-94fe-7309e9fe3442") : configmap "swift-ring-files" not found Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.602394 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2521bbad-8785-4fbf-94fe-7309e9fe3442-cache\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.602770 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2521bbad-8785-4fbf-94fe-7309e9fe3442-cache\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.603106 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.603259 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2521bbad-8785-4fbf-94fe-7309e9fe3442-lock\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.616568 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2521bbad-8785-4fbf-94fe-7309e9fe3442-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " 
pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.620102 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmzhz\" (UniqueName: \"kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-kube-api-access-xmzhz\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:05 crc kubenswrapper[4705]: I0124 08:00:05.624757 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:06 crc kubenswrapper[4705]: I0124 08:00:06.168585 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:06 crc kubenswrapper[4705]: E0124 08:00:06.168865 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 08:00:06 crc kubenswrapper[4705]: E0124 08:00:06.168899 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 08:00:06 crc kubenswrapper[4705]: E0124 08:00:06.168967 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift podName:2521bbad-8785-4fbf-94fe-7309e9fe3442 nodeName:}" failed. No retries permitted until 2026-01-24 08:00:07.168946632 +0000 UTC m=+1145.888819920 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift") pod "swift-storage-0" (UID: "2521bbad-8785-4fbf-94fe-7309e9fe3442") : configmap "swift-ring-files" not found Jan 24 08:00:07 crc kubenswrapper[4705]: I0124 08:00:07.071556 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:00:07 crc kubenswrapper[4705]: I0124 08:00:07.071620 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:00:07 crc kubenswrapper[4705]: I0124 08:00:07.186257 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:07 crc kubenswrapper[4705]: E0124 08:00:07.186531 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 08:00:07 crc kubenswrapper[4705]: E0124 08:00:07.186891 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 08:00:07 crc kubenswrapper[4705]: E0124 08:00:07.186964 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift podName:2521bbad-8785-4fbf-94fe-7309e9fe3442 nodeName:}" 
failed. No retries permitted until 2026-01-24 08:00:09.186943458 +0000 UTC m=+1147.906816746 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift") pod "swift-storage-0" (UID: "2521bbad-8785-4fbf-94fe-7309e9fe3442") : configmap "swift-ring-files" not found Jan 24 08:00:07 crc kubenswrapper[4705]: I0124 08:00:07.420761 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" podUID="fe0e2464-6e0e-4a44-9a30-6e17b8990fa3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: connect: connection refused" Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.371699 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.389370 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r" event={"ID":"27a0b314-0224-465a-9302-2ec3f4cdaf02","Type":"ContainerStarted","Data":"7509e55629a5479a8beb8376c5e43421699338a127532980bb6be6ffd5472aa1"} Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.691035 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-wvgxh"] Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.692412 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.696080 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.696707 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.696884 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.703672 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wvgxh"]
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.765481 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-etc-swift\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.765570 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-swiftconf\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.765646 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-scripts\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.765685 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-ring-data-devices\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.765735 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-dispersionconf\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.765772 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhcbf\" (UniqueName: \"kubernetes.io/projected/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-kube-api-access-xhcbf\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.765807 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-combined-ca-bundle\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.828458 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.870303 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhcbf\" (UniqueName: \"kubernetes.io/projected/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-kube-api-access-xhcbf\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.870369 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-combined-ca-bundle\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.870421 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-etc-swift\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.870467 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-swiftconf\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.870531 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-scripts\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.870569 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-ring-data-devices\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.870609 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-dispersionconf\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.871374 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-etc-swift\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.872524 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-scripts\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.872512 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-ring-data-devices\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.878621 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-dispersionconf\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.888321 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-combined-ca-bundle\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.888896 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-swiftconf\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.890133 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhcbf\" (UniqueName: \"kubernetes.io/projected/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-kube-api-access-xhcbf\") pod \"swift-ring-rebalance-wvgxh\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.971781 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-ovsdbserver-nb\") pod \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\" (UID: \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\") "
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.971928 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-config\") pod \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\" (UID: \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\") "
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.972011 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-dns-svc\") pod \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\" (UID: \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\") "
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.972047 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk7jf\" (UniqueName: \"kubernetes.io/projected/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-kube-api-access-jk7jf\") pod \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\" (UID: \"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3\") "
Jan 24 08:00:08 crc kubenswrapper[4705]: I0124 08:00:08.975735 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-kube-api-access-jk7jf" (OuterVolumeSpecName: "kube-api-access-jk7jf") pod "fe0e2464-6e0e-4a44-9a30-6e17b8990fa3" (UID: "fe0e2464-6e0e-4a44-9a30-6e17b8990fa3"). InnerVolumeSpecName "kube-api-access-jk7jf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.010366 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fe0e2464-6e0e-4a44-9a30-6e17b8990fa3" (UID: "fe0e2464-6e0e-4a44-9a30-6e17b8990fa3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.010377 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fe0e2464-6e0e-4a44-9a30-6e17b8990fa3" (UID: "fe0e2464-6e0e-4a44-9a30-6e17b8990fa3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.011465 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-config" (OuterVolumeSpecName: "config") pod "fe0e2464-6e0e-4a44-9a30-6e17b8990fa3" (UID: "fe0e2464-6e0e-4a44-9a30-6e17b8990fa3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.075316 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.075355 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-config\") on node \"crc\" DevicePath \"\""
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.075366 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.075377 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk7jf\" (UniqueName: \"kubernetes.io/projected/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3-kube-api-access-jk7jf\") on node \"crc\" DevicePath \"\""
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.126251 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wvgxh"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.279076 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0"
Jan 24 08:00:09 crc kubenswrapper[4705]: E0124 08:00:09.279301 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 24 08:00:09 crc kubenswrapper[4705]: E0124 08:00:09.279484 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 24 08:00:09 crc kubenswrapper[4705]: E0124 08:00:09.279552 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift podName:2521bbad-8785-4fbf-94fe-7309e9fe3442 nodeName:}" failed. No retries permitted until 2026-01-24 08:00:13.279531461 +0000 UTC m=+1151.999404749 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift") pod "swift-storage-0" (UID: "2521bbad-8785-4fbf-94fe-7309e9fe3442") : configmap "swift-ring-files" not found
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.398491 4705 generic.go:334] "Generic (PLEG): container finished" podID="e1fa4568-6ba7-4897-9076-b1778b317348" containerID="bff9c576de78c517853d33fba2abec2766aaeb984a42555cde3faa1f32370a73" exitCode=0
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.398617 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" event={"ID":"e1fa4568-6ba7-4897-9076-b1778b317348","Type":"ContainerDied","Data":"bff9c576de78c517853d33fba2abec2766aaeb984a42555cde3faa1f32370a73"}
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.404859 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2" event={"ID":"fe0e2464-6e0e-4a44-9a30-6e17b8990fa3","Type":"ContainerDied","Data":"dd74b573bb1decbd772e22b347e769895bc6443a233166a6ab2dc15ef29b8f72"}
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.404908 4705 scope.go:117] "RemoveContainer" containerID="37f8abb9d8d2758be3752fa0d709dc7b2ed82eea6d2eee47f1d5656e8e8999a2"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.404912 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-fspw2"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.407738 4705 generic.go:334] "Generic (PLEG): container finished" podID="27a0b314-0224-465a-9302-2ec3f4cdaf02" containerID="7509e55629a5479a8beb8376c5e43421699338a127532980bb6be6ffd5472aa1" exitCode=0
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.408032 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r" event={"ID":"27a0b314-0224-465a-9302-2ec3f4cdaf02","Type":"ContainerDied","Data":"7509e55629a5479a8beb8376c5e43421699338a127532980bb6be6ffd5472aa1"}
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.442359 4705 scope.go:117] "RemoveContainer" containerID="8b484a96b03fe91ab06f8b9790d053ab565d54d01778d2ba363233e517a6a125"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.467991 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-fspw2"]
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.479663 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-fspw2"]
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.497073 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4qgnt"]
Jan 24 08:00:09 crc kubenswrapper[4705]: E0124 08:00:09.497529 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0e2464-6e0e-4a44-9a30-6e17b8990fa3" containerName="init"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.497554 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe0e2464-6e0e-4a44-9a30-6e17b8990fa3" containerName="init"
Jan 24 08:00:09 crc kubenswrapper[4705]: E0124 08:00:09.497589 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0e2464-6e0e-4a44-9a30-6e17b8990fa3" containerName="dnsmasq-dns"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.497597 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe0e2464-6e0e-4a44-9a30-6e17b8990fa3" containerName="dnsmasq-dns"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.498237 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe0e2464-6e0e-4a44-9a30-6e17b8990fa3" containerName="dnsmasq-dns"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.498983 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4qgnt"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.506243 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.513982 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.526476 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4qgnt"]
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.572170 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wvgxh"]
Jan 24 08:00:09 crc kubenswrapper[4705]: W0124 08:00:09.574923 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71ae95bb_0592_4ebd_b74a_c2ed2cc5654e.slice/crio-94a0edcc855fc4c73c482318af2f65219dac1105d25572fc1300e0d1135fe332 WatchSource:0}: Error finding container 94a0edcc855fc4c73c482318af2f65219dac1105d25572fc1300e0d1135fe332: Status 404 returned error can't find the container with id 94a0edcc855fc4c73c482318af2f65219dac1105d25572fc1300e0d1135fe332
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.584554 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8vlx\" (UniqueName: \"kubernetes.io/projected/1f340c15-be68-45e8-b217-7770f329ea7e-kube-api-access-m8vlx\") pod \"root-account-create-update-4qgnt\" (UID: \"1f340c15-be68-45e8-b217-7770f329ea7e\") " pod="openstack/root-account-create-update-4qgnt"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.584601 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f340c15-be68-45e8-b217-7770f329ea7e-operator-scripts\") pod \"root-account-create-update-4qgnt\" (UID: \"1f340c15-be68-45e8-b217-7770f329ea7e\") " pod="openstack/root-account-create-update-4qgnt"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.595378 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe0e2464-6e0e-4a44-9a30-6e17b8990fa3" path="/var/lib/kubelet/pods/fe0e2464-6e0e-4a44-9a30-6e17b8990fa3/volumes"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.619454 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.686070 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8vlx\" (UniqueName: \"kubernetes.io/projected/1f340c15-be68-45e8-b217-7770f329ea7e-kube-api-access-m8vlx\") pod \"root-account-create-update-4qgnt\" (UID: \"1f340c15-be68-45e8-b217-7770f329ea7e\") " pod="openstack/root-account-create-update-4qgnt"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.686148 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f340c15-be68-45e8-b217-7770f329ea7e-operator-scripts\") pod \"root-account-create-update-4qgnt\" (UID: \"1f340c15-be68-45e8-b217-7770f329ea7e\") " pod="openstack/root-account-create-update-4qgnt"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.687090 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f340c15-be68-45e8-b217-7770f329ea7e-operator-scripts\") pod \"root-account-create-update-4qgnt\" (UID: \"1f340c15-be68-45e8-b217-7770f329ea7e\") " pod="openstack/root-account-create-update-4qgnt"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.709655 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8vlx\" (UniqueName: \"kubernetes.io/projected/1f340c15-be68-45e8-b217-7770f329ea7e-kube-api-access-m8vlx\") pod \"root-account-create-update-4qgnt\" (UID: \"1f340c15-be68-45e8-b217-7770f329ea7e\") " pod="openstack/root-account-create-update-4qgnt"
Jan 24 08:00:09 crc kubenswrapper[4705]: I0124 08:00:09.825426 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4qgnt"
Jan 24 08:00:10 crc kubenswrapper[4705]: I0124 08:00:10.273189 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4qgnt"]
Jan 24 08:00:10 crc kubenswrapper[4705]: I0124 08:00:10.419109 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wvgxh" event={"ID":"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e","Type":"ContainerStarted","Data":"94a0edcc855fc4c73c482318af2f65219dac1105d25572fc1300e0d1135fe332"}
Jan 24 08:00:10 crc kubenswrapper[4705]: I0124 08:00:10.423096 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" event={"ID":"e1fa4568-6ba7-4897-9076-b1778b317348","Type":"ContainerStarted","Data":"7d9cd277b55c747795c0bf203164ead2df49ba401605c4ca2f1919f3e864fb3f"}
Jan 24 08:00:10 crc kubenswrapper[4705]: I0124 08:00:10.423851 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7"
Jan 24 08:00:10 crc kubenswrapper[4705]: I0124 08:00:10.425670 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4qgnt" event={"ID":"1f340c15-be68-45e8-b217-7770f329ea7e","Type":"ContainerStarted","Data":"99063596b6297ea4b4fa9255d1a506e8536fe8771c8ab787b8cfd6d75af58861"}
Jan 24 08:00:10 crc kubenswrapper[4705]: I0124 08:00:10.476439 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" podStartSLOduration=7.47305511 podStartE2EDuration="7.47305511s" podCreationTimestamp="2026-01-24 08:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:00:10.446502452 +0000 UTC m=+1149.166375740" watchObservedRunningTime="2026-01-24 08:00:10.47305511 +0000 UTC m=+1149.192928398"
Jan 24 08:00:10 crc kubenswrapper[4705]: I0124 08:00:10.852255 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-z5khx"]
Jan 24 08:00:10 crc kubenswrapper[4705]: I0124 08:00:10.853283 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-z5khx"
Jan 24 08:00:10 crc kubenswrapper[4705]: I0124 08:00:10.881418 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-z5khx"]
Jan 24 08:00:10 crc kubenswrapper[4705]: I0124 08:00:10.886387 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r"
Jan 24 08:00:10 crc kubenswrapper[4705]: I0124 08:00:10.986064 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4f5d-account-create-update-897h6"]
Jan 24 08:00:10 crc kubenswrapper[4705]: E0124 08:00:10.986420 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a0b314-0224-465a-9302-2ec3f4cdaf02" containerName="collect-profiles"
Jan 24 08:00:10 crc kubenswrapper[4705]: I0124 08:00:10.986437 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a0b314-0224-465a-9302-2ec3f4cdaf02" containerName="collect-profiles"
Jan 24 08:00:10 crc kubenswrapper[4705]: I0124 08:00:10.986604 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a0b314-0224-465a-9302-2ec3f4cdaf02" containerName="collect-profiles"
Jan 24 08:00:10 crc kubenswrapper[4705]: I0124 08:00:10.987175 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4f5d-account-create-update-897h6"
Jan 24 08:00:10 crc kubenswrapper[4705]: I0124 08:00:10.989166 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.000450 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4f5d-account-create-update-897h6"]
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.045482 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a0b314-0224-465a-9302-2ec3f4cdaf02-secret-volume\") pod \"27a0b314-0224-465a-9302-2ec3f4cdaf02\" (UID: \"27a0b314-0224-465a-9302-2ec3f4cdaf02\") "
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.045606 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a0b314-0224-465a-9302-2ec3f4cdaf02-config-volume\") pod \"27a0b314-0224-465a-9302-2ec3f4cdaf02\" (UID: \"27a0b314-0224-465a-9302-2ec3f4cdaf02\") "
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.045673 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fclgq\" (UniqueName: \"kubernetes.io/projected/27a0b314-0224-465a-9302-2ec3f4cdaf02-kube-api-access-fclgq\") pod \"27a0b314-0224-465a-9302-2ec3f4cdaf02\" (UID: \"27a0b314-0224-465a-9302-2ec3f4cdaf02\") "
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.045971 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af3a2337-3d3f-4892-a825-5e88cc3cf834-operator-scripts\") pod \"keystone-db-create-z5khx\" (UID: \"af3a2337-3d3f-4892-a825-5e88cc3cf834\") " pod="openstack/keystone-db-create-z5khx"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.046047 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2kg5\" (UniqueName: \"kubernetes.io/projected/af3a2337-3d3f-4892-a825-5e88cc3cf834-kube-api-access-r2kg5\") pod \"keystone-db-create-z5khx\" (UID: \"af3a2337-3d3f-4892-a825-5e88cc3cf834\") " pod="openstack/keystone-db-create-z5khx"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.046169 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a0b314-0224-465a-9302-2ec3f4cdaf02-config-volume" (OuterVolumeSpecName: "config-volume") pod "27a0b314-0224-465a-9302-2ec3f4cdaf02" (UID: "27a0b314-0224-465a-9302-2ec3f4cdaf02"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.055211 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a0b314-0224-465a-9302-2ec3f4cdaf02-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "27a0b314-0224-465a-9302-2ec3f4cdaf02" (UID: "27a0b314-0224-465a-9302-2ec3f4cdaf02"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.058976 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a0b314-0224-465a-9302-2ec3f4cdaf02-kube-api-access-fclgq" (OuterVolumeSpecName: "kube-api-access-fclgq") pod "27a0b314-0224-465a-9302-2ec3f4cdaf02" (UID: "27a0b314-0224-465a-9302-2ec3f4cdaf02"). InnerVolumeSpecName "kube-api-access-fclgq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.100231 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-j2ld4"]
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.106015 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j2ld4"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.130484 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-j2ld4"]
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.147357 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af3a2337-3d3f-4892-a825-5e88cc3cf834-operator-scripts\") pod \"keystone-db-create-z5khx\" (UID: \"af3a2337-3d3f-4892-a825-5e88cc3cf834\") " pod="openstack/keystone-db-create-z5khx"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.147953 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v95mn\" (UniqueName: \"kubernetes.io/projected/e9e9f514-b21b-430c-a352-124dbb196d6d-kube-api-access-v95mn\") pod \"keystone-4f5d-account-create-update-897h6\" (UID: \"e9e9f514-b21b-430c-a352-124dbb196d6d\") " pod="openstack/keystone-4f5d-account-create-update-897h6"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.147996 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2kg5\" (UniqueName: \"kubernetes.io/projected/af3a2337-3d3f-4892-a825-5e88cc3cf834-kube-api-access-r2kg5\") pod \"keystone-db-create-z5khx\" (UID: \"af3a2337-3d3f-4892-a825-5e88cc3cf834\") " pod="openstack/keystone-db-create-z5khx"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.148078 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9e9f514-b21b-430c-a352-124dbb196d6d-operator-scripts\") pod \"keystone-4f5d-account-create-update-897h6\" (UID: \"e9e9f514-b21b-430c-a352-124dbb196d6d\") " pod="openstack/keystone-4f5d-account-create-update-897h6"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.148243 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a0b314-0224-465a-9302-2ec3f4cdaf02-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.148261 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a0b314-0224-465a-9302-2ec3f4cdaf02-config-volume\") on node \"crc\" DevicePath \"\""
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.148273 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fclgq\" (UniqueName: \"kubernetes.io/projected/27a0b314-0224-465a-9302-2ec3f4cdaf02-kube-api-access-fclgq\") on node \"crc\" DevicePath \"\""
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.148511 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af3a2337-3d3f-4892-a825-5e88cc3cf834-operator-scripts\") pod \"keystone-db-create-z5khx\" (UID: \"af3a2337-3d3f-4892-a825-5e88cc3cf834\") " pod="openstack/keystone-db-create-z5khx"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.166806 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2kg5\" (UniqueName: \"kubernetes.io/projected/af3a2337-3d3f-4892-a825-5e88cc3cf834-kube-api-access-r2kg5\") pod \"keystone-db-create-z5khx\" (UID: \"af3a2337-3d3f-4892-a825-5e88cc3cf834\") " pod="openstack/keystone-db-create-z5khx"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.179280 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1362-account-create-update-zfxhz"]
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.180757 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1362-account-create-update-zfxhz"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.187357 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.187982 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-z5khx"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.192889 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1362-account-create-update-zfxhz"]
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.250377 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb424\" (UniqueName: \"kubernetes.io/projected/2084db85-248b-4371-86f6-4ff38216e099-kube-api-access-cb424\") pod \"placement-db-create-j2ld4\" (UID: \"2084db85-248b-4371-86f6-4ff38216e099\") " pod="openstack/placement-db-create-j2ld4"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.250421 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2084db85-248b-4371-86f6-4ff38216e099-operator-scripts\") pod \"placement-db-create-j2ld4\" (UID: \"2084db85-248b-4371-86f6-4ff38216e099\") " pod="openstack/placement-db-create-j2ld4"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.250704 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v95mn\" (UniqueName: \"kubernetes.io/projected/e9e9f514-b21b-430c-a352-124dbb196d6d-kube-api-access-v95mn\") pod \"keystone-4f5d-account-create-update-897h6\" (UID: \"e9e9f514-b21b-430c-a352-124dbb196d6d\") " pod="openstack/keystone-4f5d-account-create-update-897h6"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.250877 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9e9f514-b21b-430c-a352-124dbb196d6d-operator-scripts\") pod \"keystone-4f5d-account-create-update-897h6\" (UID: \"e9e9f514-b21b-430c-a352-124dbb196d6d\") " pod="openstack/keystone-4f5d-account-create-update-897h6"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.251868 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9e9f514-b21b-430c-a352-124dbb196d6d-operator-scripts\") pod \"keystone-4f5d-account-create-update-897h6\" (UID: \"e9e9f514-b21b-430c-a352-124dbb196d6d\") " pod="openstack/keystone-4f5d-account-create-update-897h6"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.279945 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v95mn\" (UniqueName: \"kubernetes.io/projected/e9e9f514-b21b-430c-a352-124dbb196d6d-kube-api-access-v95mn\") pod \"keystone-4f5d-account-create-update-897h6\" (UID: \"e9e9f514-b21b-430c-a352-124dbb196d6d\") " pod="openstack/keystone-4f5d-account-create-update-897h6"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.352933 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/874c88aa-a889-442c-92ea-1bdfe2a23761-operator-scripts\") pod \"placement-1362-account-create-update-zfxhz\" (UID: \"874c88aa-a889-442c-92ea-1bdfe2a23761\") " pod="openstack/placement-1362-account-create-update-zfxhz"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.353001 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-596xm\" (UniqueName: \"kubernetes.io/projected/874c88aa-a889-442c-92ea-1bdfe2a23761-kube-api-access-596xm\") pod \"placement-1362-account-create-update-zfxhz\" (UID: \"874c88aa-a889-442c-92ea-1bdfe2a23761\") " pod="openstack/placement-1362-account-create-update-zfxhz"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.353332 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb424\" (UniqueName: \"kubernetes.io/projected/2084db85-248b-4371-86f6-4ff38216e099-kube-api-access-cb424\") pod \"placement-db-create-j2ld4\" (UID: \"2084db85-248b-4371-86f6-4ff38216e099\") " pod="openstack/placement-db-create-j2ld4"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.353391 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2084db85-248b-4371-86f6-4ff38216e099-operator-scripts\") pod \"placement-db-create-j2ld4\" (UID: \"2084db85-248b-4371-86f6-4ff38216e099\") " pod="openstack/placement-db-create-j2ld4"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.354332 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2084db85-248b-4371-86f6-4ff38216e099-operator-scripts\") pod \"placement-db-create-j2ld4\" (UID: \"2084db85-248b-4371-86f6-4ff38216e099\") " pod="openstack/placement-db-create-j2ld4"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.374391 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb424\" (UniqueName: \"kubernetes.io/projected/2084db85-248b-4371-86f6-4ff38216e099-kube-api-access-cb424\") pod \"placement-db-create-j2ld4\" (UID: \"2084db85-248b-4371-86f6-4ff38216e099\") " pod="openstack/placement-db-create-j2ld4"
Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.429675 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4f5d-account-create-update-897h6" Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.435744 4705 generic.go:334] "Generic (PLEG): container finished" podID="1f340c15-be68-45e8-b217-7770f329ea7e" containerID="fb02679e21e1809eb5bdf67d85555d45c326c2eb881bba8e40b7ecc51023af4d" exitCode=0 Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.435852 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4qgnt" event={"ID":"1f340c15-be68-45e8-b217-7770f329ea7e","Type":"ContainerDied","Data":"fb02679e21e1809eb5bdf67d85555d45c326c2eb881bba8e40b7ecc51023af4d"} Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.440804 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j2ld4" Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.447897 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r" Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.448998 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r" event={"ID":"27a0b314-0224-465a-9302-2ec3f4cdaf02","Type":"ContainerDied","Data":"d8a5732d585ac9b2b7cd178181f9c463133c56c941e6962c6d6466a1fcfc2862"} Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.449037 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8a5732d585ac9b2b7cd178181f9c463133c56c941e6962c6d6466a1fcfc2862" Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.461040 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/874c88aa-a889-442c-92ea-1bdfe2a23761-operator-scripts\") pod \"placement-1362-account-create-update-zfxhz\" (UID: \"874c88aa-a889-442c-92ea-1bdfe2a23761\") " 
pod="openstack/placement-1362-account-create-update-zfxhz" Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.461113 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-596xm\" (UniqueName: \"kubernetes.io/projected/874c88aa-a889-442c-92ea-1bdfe2a23761-kube-api-access-596xm\") pod \"placement-1362-account-create-update-zfxhz\" (UID: \"874c88aa-a889-442c-92ea-1bdfe2a23761\") " pod="openstack/placement-1362-account-create-update-zfxhz" Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.462235 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/874c88aa-a889-442c-92ea-1bdfe2a23761-operator-scripts\") pod \"placement-1362-account-create-update-zfxhz\" (UID: \"874c88aa-a889-442c-92ea-1bdfe2a23761\") " pod="openstack/placement-1362-account-create-update-zfxhz" Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.490059 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-596xm\" (UniqueName: \"kubernetes.io/projected/874c88aa-a889-442c-92ea-1bdfe2a23761-kube-api-access-596xm\") pod \"placement-1362-account-create-update-zfxhz\" (UID: \"874c88aa-a889-442c-92ea-1bdfe2a23761\") " pod="openstack/placement-1362-account-create-update-zfxhz" Jan 24 08:00:11 crc kubenswrapper[4705]: I0124 08:00:11.505363 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1362-account-create-update-zfxhz" Jan 24 08:00:13 crc kubenswrapper[4705]: I0124 08:00:13.301695 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:13 crc kubenswrapper[4705]: E0124 08:00:13.301905 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 08:00:13 crc kubenswrapper[4705]: E0124 08:00:13.302285 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 08:00:13 crc kubenswrapper[4705]: E0124 08:00:13.302338 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift podName:2521bbad-8785-4fbf-94fe-7309e9fe3442 nodeName:}" failed. No retries permitted until 2026-01-24 08:00:21.302321294 +0000 UTC m=+1160.022194582 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift") pod "swift-storage-0" (UID: "2521bbad-8785-4fbf-94fe-7309e9fe3442") : configmap "swift-ring-files" not found Jan 24 08:00:13 crc kubenswrapper[4705]: I0124 08:00:13.711815 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4qgnt" Jan 24 08:00:13 crc kubenswrapper[4705]: I0124 08:00:13.813644 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8vlx\" (UniqueName: \"kubernetes.io/projected/1f340c15-be68-45e8-b217-7770f329ea7e-kube-api-access-m8vlx\") pod \"1f340c15-be68-45e8-b217-7770f329ea7e\" (UID: \"1f340c15-be68-45e8-b217-7770f329ea7e\") " Jan 24 08:00:13 crc kubenswrapper[4705]: I0124 08:00:13.813684 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f340c15-be68-45e8-b217-7770f329ea7e-operator-scripts\") pod \"1f340c15-be68-45e8-b217-7770f329ea7e\" (UID: \"1f340c15-be68-45e8-b217-7770f329ea7e\") " Jan 24 08:00:13 crc kubenswrapper[4705]: I0124 08:00:13.814763 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f340c15-be68-45e8-b217-7770f329ea7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f340c15-be68-45e8-b217-7770f329ea7e" (UID: "1f340c15-be68-45e8-b217-7770f329ea7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:13 crc kubenswrapper[4705]: I0124 08:00:13.817718 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f340c15-be68-45e8-b217-7770f329ea7e-kube-api-access-m8vlx" (OuterVolumeSpecName: "kube-api-access-m8vlx") pod "1f340c15-be68-45e8-b217-7770f329ea7e" (UID: "1f340c15-be68-45e8-b217-7770f329ea7e"). InnerVolumeSpecName "kube-api-access-m8vlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:13 crc kubenswrapper[4705]: I0124 08:00:13.915905 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8vlx\" (UniqueName: \"kubernetes.io/projected/1f340c15-be68-45e8-b217-7770f329ea7e-kube-api-access-m8vlx\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:13 crc kubenswrapper[4705]: I0124 08:00:13.916186 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f340c15-be68-45e8-b217-7770f329ea7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:14 crc kubenswrapper[4705]: I0124 08:00:14.247445 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-j2ld4"] Jan 24 08:00:14 crc kubenswrapper[4705]: W0124 08:00:14.373492 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf3a2337_3d3f_4892_a825_5e88cc3cf834.slice/crio-09e0b7057c51ff498a4dc3782f748d61ac404b18e0e8c8693f69c1625e72b709 WatchSource:0}: Error finding container 09e0b7057c51ff498a4dc3782f748d61ac404b18e0e8c8693f69c1625e72b709: Status 404 returned error can't find the container with id 09e0b7057c51ff498a4dc3782f748d61ac404b18e0e8c8693f69c1625e72b709 Jan 24 08:00:14 crc kubenswrapper[4705]: W0124 08:00:14.374463 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9e9f514_b21b_430c_a352_124dbb196d6d.slice/crio-93220de5d09281b6f9a5ba6ee847c46c3e41bd206c91ea086d171f2dfdb8ebf6 WatchSource:0}: Error finding container 93220de5d09281b6f9a5ba6ee847c46c3e41bd206c91ea086d171f2dfdb8ebf6: Status 404 returned error can't find the container with id 93220de5d09281b6f9a5ba6ee847c46c3e41bd206c91ea086d171f2dfdb8ebf6 Jan 24 08:00:14 crc kubenswrapper[4705]: I0124 08:00:14.377672 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-db-create-z5khx"] Jan 24 08:00:14 crc kubenswrapper[4705]: I0124 08:00:14.385974 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4f5d-account-create-update-897h6"] Jan 24 08:00:14 crc kubenswrapper[4705]: I0124 08:00:14.473369 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1362-account-create-update-zfxhz"] Jan 24 08:00:14 crc kubenswrapper[4705]: I0124 08:00:14.473813 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4qgnt" event={"ID":"1f340c15-be68-45e8-b217-7770f329ea7e","Type":"ContainerDied","Data":"99063596b6297ea4b4fa9255d1a506e8536fe8771c8ab787b8cfd6d75af58861"} Jan 24 08:00:14 crc kubenswrapper[4705]: I0124 08:00:14.473875 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99063596b6297ea4b4fa9255d1a506e8536fe8771c8ab787b8cfd6d75af58861" Jan 24 08:00:14 crc kubenswrapper[4705]: I0124 08:00:14.473941 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4qgnt" Jan 24 08:00:14 crc kubenswrapper[4705]: I0124 08:00:14.485706 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-z5khx" event={"ID":"af3a2337-3d3f-4892-a825-5e88cc3cf834","Type":"ContainerStarted","Data":"09e0b7057c51ff498a4dc3782f748d61ac404b18e0e8c8693f69c1625e72b709"} Jan 24 08:00:14 crc kubenswrapper[4705]: I0124 08:00:14.494459 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wvgxh" event={"ID":"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e","Type":"ContainerStarted","Data":"b68a86bb6be9a59483119d97082afbc5e87f3ee6868190f554fe8bc0d9a9970c"} Jan 24 08:00:14 crc kubenswrapper[4705]: I0124 08:00:14.496373 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4f5d-account-create-update-897h6" event={"ID":"e9e9f514-b21b-430c-a352-124dbb196d6d","Type":"ContainerStarted","Data":"93220de5d09281b6f9a5ba6ee847c46c3e41bd206c91ea086d171f2dfdb8ebf6"} Jan 24 08:00:14 crc kubenswrapper[4705]: I0124 08:00:14.499538 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j2ld4" event={"ID":"2084db85-248b-4371-86f6-4ff38216e099","Type":"ContainerStarted","Data":"58d6e93cdbd28215cee24b92af0f059cd38c8b92f55bd03048eded63f361eaba"} Jan 24 08:00:14 crc kubenswrapper[4705]: I0124 08:00:14.499570 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j2ld4" event={"ID":"2084db85-248b-4371-86f6-4ff38216e099","Type":"ContainerStarted","Data":"89c36361d9b0f84807aebc064a6433b28cdbf9724df5a8d199c6874204b888ec"} Jan 24 08:00:14 crc kubenswrapper[4705]: I0124 08:00:14.516510 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-wvgxh" podStartSLOduration=2.295263602 podStartE2EDuration="6.516497224s" podCreationTimestamp="2026-01-24 08:00:08 +0000 UTC" firstStartedPulling="2026-01-24 08:00:09.577131164 
+0000 UTC m=+1148.297004452" lastFinishedPulling="2026-01-24 08:00:13.798364786 +0000 UTC m=+1152.518238074" observedRunningTime="2026-01-24 08:00:14.513466449 +0000 UTC m=+1153.233339747" watchObservedRunningTime="2026-01-24 08:00:14.516497224 +0000 UTC m=+1153.236370512" Jan 24 08:00:14 crc kubenswrapper[4705]: I0124 08:00:14.533745 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-j2ld4" podStartSLOduration=3.5337285400000003 podStartE2EDuration="3.53372854s" podCreationTimestamp="2026-01-24 08:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:00:14.530519299 +0000 UTC m=+1153.250392587" watchObservedRunningTime="2026-01-24 08:00:14.53372854 +0000 UTC m=+1153.253601828" Jan 24 08:00:15 crc kubenswrapper[4705]: I0124 08:00:15.518295 4705 generic.go:334] "Generic (PLEG): container finished" podID="af3a2337-3d3f-4892-a825-5e88cc3cf834" containerID="5502486742c0464560f4b4dbdfc37e05af383f8cd794b0b3f78fee8b2545d32a" exitCode=0 Jan 24 08:00:15 crc kubenswrapper[4705]: I0124 08:00:15.518619 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-z5khx" event={"ID":"af3a2337-3d3f-4892-a825-5e88cc3cf834","Type":"ContainerDied","Data":"5502486742c0464560f4b4dbdfc37e05af383f8cd794b0b3f78fee8b2545d32a"} Jan 24 08:00:15 crc kubenswrapper[4705]: I0124 08:00:15.521389 4705 generic.go:334] "Generic (PLEG): container finished" podID="874c88aa-a889-442c-92ea-1bdfe2a23761" containerID="38bde5cf69a417a30b55d89ca52e211f599c4d17b3bfccc698d7b8840997a4f4" exitCode=0 Jan 24 08:00:15 crc kubenswrapper[4705]: I0124 08:00:15.522735 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1362-account-create-update-zfxhz" event={"ID":"874c88aa-a889-442c-92ea-1bdfe2a23761","Type":"ContainerDied","Data":"38bde5cf69a417a30b55d89ca52e211f599c4d17b3bfccc698d7b8840997a4f4"} Jan 24 
08:00:15 crc kubenswrapper[4705]: I0124 08:00:15.522798 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1362-account-create-update-zfxhz" event={"ID":"874c88aa-a889-442c-92ea-1bdfe2a23761","Type":"ContainerStarted","Data":"842eda448a06aa97b116911bacacdefc13944230983c739f191c047356a95d2e"} Jan 24 08:00:15 crc kubenswrapper[4705]: I0124 08:00:15.526009 4705 generic.go:334] "Generic (PLEG): container finished" podID="e9e9f514-b21b-430c-a352-124dbb196d6d" containerID="e504850f8af85801945fb4884114e302e53f532dd2ad546f50c60eac11b78dd6" exitCode=0 Jan 24 08:00:15 crc kubenswrapper[4705]: I0124 08:00:15.526113 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4f5d-account-create-update-897h6" event={"ID":"e9e9f514-b21b-430c-a352-124dbb196d6d","Type":"ContainerDied","Data":"e504850f8af85801945fb4884114e302e53f532dd2ad546f50c60eac11b78dd6"} Jan 24 08:00:15 crc kubenswrapper[4705]: I0124 08:00:15.532508 4705 generic.go:334] "Generic (PLEG): container finished" podID="2084db85-248b-4371-86f6-4ff38216e099" containerID="58d6e93cdbd28215cee24b92af0f059cd38c8b92f55bd03048eded63f361eaba" exitCode=0 Jan 24 08:00:15 crc kubenswrapper[4705]: I0124 08:00:15.532577 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j2ld4" event={"ID":"2084db85-248b-4371-86f6-4ff38216e099","Type":"ContainerDied","Data":"58d6e93cdbd28215cee24b92af0f059cd38c8b92f55bd03048eded63f361eaba"} Jan 24 08:00:16 crc kubenswrapper[4705]: I0124 08:00:16.955854 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-j6cfn"] Jan 24 08:00:16 crc kubenswrapper[4705]: E0124 08:00:16.958094 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f340c15-be68-45e8-b217-7770f329ea7e" containerName="mariadb-account-create-update" Jan 24 08:00:16 crc kubenswrapper[4705]: I0124 08:00:16.958109 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1f340c15-be68-45e8-b217-7770f329ea7e" containerName="mariadb-account-create-update" Jan 24 08:00:16 crc kubenswrapper[4705]: I0124 08:00:16.958310 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f340c15-be68-45e8-b217-7770f329ea7e" containerName="mariadb-account-create-update" Jan 24 08:00:16 crc kubenswrapper[4705]: I0124 08:00:16.958837 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-j6cfn" Jan 24 08:00:16 crc kubenswrapper[4705]: I0124 08:00:16.983917 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-j6cfn"] Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.003600 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-z5khx" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.081991 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-dfc3-account-create-update-z59kd"] Jan 24 08:00:17 crc kubenswrapper[4705]: E0124 08:00:17.082443 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af3a2337-3d3f-4892-a825-5e88cc3cf834" containerName="mariadb-database-create" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.082465 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="af3a2337-3d3f-4892-a825-5e88cc3cf834" containerName="mariadb-database-create" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.082671 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="af3a2337-3d3f-4892-a825-5e88cc3cf834" containerName="mariadb-database-create" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.083317 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-dfc3-account-create-update-z59kd" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.086427 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.089528 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-dfc3-account-create-update-z59kd"] Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.094652 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/474fa99b-e87b-40e0-9de8-7bffc3b57abe-operator-scripts\") pod \"glance-db-create-j6cfn\" (UID: \"474fa99b-e87b-40e0-9de8-7bffc3b57abe\") " pod="openstack/glance-db-create-j6cfn" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.094859 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgqn4\" (UniqueName: \"kubernetes.io/projected/474fa99b-e87b-40e0-9de8-7bffc3b57abe-kube-api-access-sgqn4\") pod \"glance-db-create-j6cfn\" (UID: \"474fa99b-e87b-40e0-9de8-7bffc3b57abe\") " pod="openstack/glance-db-create-j6cfn" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.106216 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dqhqz" podUID="e50e3aa7-48d0-4559-9f09-f0a9a54232a7" containerName="ovn-controller" probeResult="failure" output=< Jan 24 08:00:17 crc kubenswrapper[4705]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 24 08:00:17 crc kubenswrapper[4705]: > Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.182233 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4f5d-account-create-update-897h6" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.187623 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-llw8s" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.196348 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2kg5\" (UniqueName: \"kubernetes.io/projected/af3a2337-3d3f-4892-a825-5e88cc3cf834-kube-api-access-r2kg5\") pod \"af3a2337-3d3f-4892-a825-5e88cc3cf834\" (UID: \"af3a2337-3d3f-4892-a825-5e88cc3cf834\") " Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.196519 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af3a2337-3d3f-4892-a825-5e88cc3cf834-operator-scripts\") pod \"af3a2337-3d3f-4892-a825-5e88cc3cf834\" (UID: \"af3a2337-3d3f-4892-a825-5e88cc3cf834\") " Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.196793 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/474fa99b-e87b-40e0-9de8-7bffc3b57abe-operator-scripts\") pod \"glance-db-create-j6cfn\" (UID: \"474fa99b-e87b-40e0-9de8-7bffc3b57abe\") " pod="openstack/glance-db-create-j6cfn" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.196888 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgqn4\" (UniqueName: \"kubernetes.io/projected/474fa99b-e87b-40e0-9de8-7bffc3b57abe-kube-api-access-sgqn4\") pod \"glance-db-create-j6cfn\" (UID: \"474fa99b-e87b-40e0-9de8-7bffc3b57abe\") " pod="openstack/glance-db-create-j6cfn" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.196956 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5062aa1f-60d7-499c-9751-eb326a788033-operator-scripts\") pod \"glance-dfc3-account-create-update-z59kd\" (UID: \"5062aa1f-60d7-499c-9751-eb326a788033\") " pod="openstack/glance-dfc3-account-create-update-z59kd" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.197056 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg7g5\" (UniqueName: \"kubernetes.io/projected/5062aa1f-60d7-499c-9751-eb326a788033-kube-api-access-dg7g5\") pod \"glance-dfc3-account-create-update-z59kd\" (UID: \"5062aa1f-60d7-499c-9751-eb326a788033\") " pod="openstack/glance-dfc3-account-create-update-z59kd" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.197807 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af3a2337-3d3f-4892-a825-5e88cc3cf834-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af3a2337-3d3f-4892-a825-5e88cc3cf834" (UID: "af3a2337-3d3f-4892-a825-5e88cc3cf834"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.198588 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/474fa99b-e87b-40e0-9de8-7bffc3b57abe-operator-scripts\") pod \"glance-db-create-j6cfn\" (UID: \"474fa99b-e87b-40e0-9de8-7bffc3b57abe\") " pod="openstack/glance-db-create-j6cfn" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.206412 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1362-account-create-update-zfxhz" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.215411 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af3a2337-3d3f-4892-a825-5e88cc3cf834-kube-api-access-r2kg5" (OuterVolumeSpecName: "kube-api-access-r2kg5") pod "af3a2337-3d3f-4892-a825-5e88cc3cf834" (UID: "af3a2337-3d3f-4892-a825-5e88cc3cf834"). InnerVolumeSpecName "kube-api-access-r2kg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.219250 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j2ld4" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.237371 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgqn4\" (UniqueName: \"kubernetes.io/projected/474fa99b-e87b-40e0-9de8-7bffc3b57abe-kube-api-access-sgqn4\") pod \"glance-db-create-j6cfn\" (UID: \"474fa99b-e87b-40e0-9de8-7bffc3b57abe\") " pod="openstack/glance-db-create-j6cfn" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.328089 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-j6cfn" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.328150 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9e9f514-b21b-430c-a352-124dbb196d6d-operator-scripts\") pod \"e9e9f514-b21b-430c-a352-124dbb196d6d\" (UID: \"e9e9f514-b21b-430c-a352-124dbb196d6d\") " Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.328380 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v95mn\" (UniqueName: \"kubernetes.io/projected/e9e9f514-b21b-430c-a352-124dbb196d6d-kube-api-access-v95mn\") pod \"e9e9f514-b21b-430c-a352-124dbb196d6d\" (UID: \"e9e9f514-b21b-430c-a352-124dbb196d6d\") " Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.328599 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5062aa1f-60d7-499c-9751-eb326a788033-operator-scripts\") pod \"glance-dfc3-account-create-update-z59kd\" (UID: \"5062aa1f-60d7-499c-9751-eb326a788033\") " pod="openstack/glance-dfc3-account-create-update-z59kd" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.328683 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg7g5\" (UniqueName: \"kubernetes.io/projected/5062aa1f-60d7-499c-9751-eb326a788033-kube-api-access-dg7g5\") pod \"glance-dfc3-account-create-update-z59kd\" (UID: \"5062aa1f-60d7-499c-9751-eb326a788033\") " pod="openstack/glance-dfc3-account-create-update-z59kd" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.328749 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af3a2337-3d3f-4892-a825-5e88cc3cf834-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.328760 4705 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-r2kg5\" (UniqueName: \"kubernetes.io/projected/af3a2337-3d3f-4892-a825-5e88cc3cf834-kube-api-access-r2kg5\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.329701 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5062aa1f-60d7-499c-9751-eb326a788033-operator-scripts\") pod \"glance-dfc3-account-create-update-z59kd\" (UID: \"5062aa1f-60d7-499c-9751-eb326a788033\") " pod="openstack/glance-dfc3-account-create-update-z59kd" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.334367 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9e9f514-b21b-430c-a352-124dbb196d6d-kube-api-access-v95mn" (OuterVolumeSpecName: "kube-api-access-v95mn") pod "e9e9f514-b21b-430c-a352-124dbb196d6d" (UID: "e9e9f514-b21b-430c-a352-124dbb196d6d"). InnerVolumeSpecName "kube-api-access-v95mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.335048 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9e9f514-b21b-430c-a352-124dbb196d6d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9e9f514-b21b-430c-a352-124dbb196d6d" (UID: "e9e9f514-b21b-430c-a352-124dbb196d6d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.349231 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg7g5\" (UniqueName: \"kubernetes.io/projected/5062aa1f-60d7-499c-9751-eb326a788033-kube-api-access-dg7g5\") pod \"glance-dfc3-account-create-update-z59kd\" (UID: \"5062aa1f-60d7-499c-9751-eb326a788033\") " pod="openstack/glance-dfc3-account-create-update-z59kd" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.434883 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/874c88aa-a889-442c-92ea-1bdfe2a23761-operator-scripts\") pod \"874c88aa-a889-442c-92ea-1bdfe2a23761\" (UID: \"874c88aa-a889-442c-92ea-1bdfe2a23761\") " Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.434976 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2084db85-248b-4371-86f6-4ff38216e099-operator-scripts\") pod \"2084db85-248b-4371-86f6-4ff38216e099\" (UID: \"2084db85-248b-4371-86f6-4ff38216e099\") " Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.435003 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-596xm\" (UniqueName: \"kubernetes.io/projected/874c88aa-a889-442c-92ea-1bdfe2a23761-kube-api-access-596xm\") pod \"874c88aa-a889-442c-92ea-1bdfe2a23761\" (UID: \"874c88aa-a889-442c-92ea-1bdfe2a23761\") " Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.435027 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb424\" (UniqueName: \"kubernetes.io/projected/2084db85-248b-4371-86f6-4ff38216e099-kube-api-access-cb424\") pod \"2084db85-248b-4371-86f6-4ff38216e099\" (UID: \"2084db85-248b-4371-86f6-4ff38216e099\") " Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.435371 4705 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v95mn\" (UniqueName: \"kubernetes.io/projected/e9e9f514-b21b-430c-a352-124dbb196d6d-kube-api-access-v95mn\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.435388 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9e9f514-b21b-430c-a352-124dbb196d6d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.435526 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/874c88aa-a889-442c-92ea-1bdfe2a23761-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "874c88aa-a889-442c-92ea-1bdfe2a23761" (UID: "874c88aa-a889-442c-92ea-1bdfe2a23761"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.435531 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2084db85-248b-4371-86f6-4ff38216e099-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2084db85-248b-4371-86f6-4ff38216e099" (UID: "2084db85-248b-4371-86f6-4ff38216e099"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.438517 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/874c88aa-a889-442c-92ea-1bdfe2a23761-kube-api-access-596xm" (OuterVolumeSpecName: "kube-api-access-596xm") pod "874c88aa-a889-442c-92ea-1bdfe2a23761" (UID: "874c88aa-a889-442c-92ea-1bdfe2a23761"). InnerVolumeSpecName "kube-api-access-596xm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.440229 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2084db85-248b-4371-86f6-4ff38216e099-kube-api-access-cb424" (OuterVolumeSpecName: "kube-api-access-cb424") pod "2084db85-248b-4371-86f6-4ff38216e099" (UID: "2084db85-248b-4371-86f6-4ff38216e099"). InnerVolumeSpecName "kube-api-access-cb424". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.494173 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-dfc3-account-create-update-z59kd" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.537732 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/874c88aa-a889-442c-92ea-1bdfe2a23761-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.537768 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2084db85-248b-4371-86f6-4ff38216e099-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.537781 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-596xm\" (UniqueName: \"kubernetes.io/projected/874c88aa-a889-442c-92ea-1bdfe2a23761-kube-api-access-596xm\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.537795 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb424\" (UniqueName: \"kubernetes.io/projected/2084db85-248b-4371-86f6-4ff38216e099-kube-api-access-cb424\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.564557 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-z5khx" 
event={"ID":"af3a2337-3d3f-4892-a825-5e88cc3cf834","Type":"ContainerDied","Data":"09e0b7057c51ff498a4dc3782f748d61ac404b18e0e8c8693f69c1625e72b709"} Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.564591 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09e0b7057c51ff498a4dc3782f748d61ac404b18e0e8c8693f69c1625e72b709" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.564645 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-z5khx" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.584677 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1362-account-create-update-zfxhz" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.607607 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4f5d-account-create-update-897h6" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.609752 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-j2ld4" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.618560 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1362-account-create-update-zfxhz" event={"ID":"874c88aa-a889-442c-92ea-1bdfe2a23761","Type":"ContainerDied","Data":"842eda448a06aa97b116911bacacdefc13944230983c739f191c047356a95d2e"} Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.618599 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="842eda448a06aa97b116911bacacdefc13944230983c739f191c047356a95d2e" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.618630 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4f5d-account-create-update-897h6" event={"ID":"e9e9f514-b21b-430c-a352-124dbb196d6d","Type":"ContainerDied","Data":"93220de5d09281b6f9a5ba6ee847c46c3e41bd206c91ea086d171f2dfdb8ebf6"} Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.618643 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93220de5d09281b6f9a5ba6ee847c46c3e41bd206c91ea086d171f2dfdb8ebf6" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.618653 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j2ld4" event={"ID":"2084db85-248b-4371-86f6-4ff38216e099","Type":"ContainerDied","Data":"89c36361d9b0f84807aebc064a6433b28cdbf9724df5a8d199c6874204b888ec"} Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.618665 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89c36361d9b0f84807aebc064a6433b28cdbf9724df5a8d199c6874204b888ec" Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.768027 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-j6cfn"] Jan 24 08:00:17 crc kubenswrapper[4705]: I0124 08:00:17.969343 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-dfc3-account-create-update-z59kd"] Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.185693 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4qgnt"] Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.194165 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4qgnt"] Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.267027 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xtf2r"] Jan 24 08:00:18 crc kubenswrapper[4705]: E0124 08:00:18.267470 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2084db85-248b-4371-86f6-4ff38216e099" containerName="mariadb-database-create" Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.267493 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2084db85-248b-4371-86f6-4ff38216e099" containerName="mariadb-database-create" Jan 24 08:00:18 crc kubenswrapper[4705]: E0124 08:00:18.267505 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e9f514-b21b-430c-a352-124dbb196d6d" containerName="mariadb-account-create-update" Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.267513 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e9f514-b21b-430c-a352-124dbb196d6d" containerName="mariadb-account-create-update" Jan 24 08:00:18 crc kubenswrapper[4705]: E0124 08:00:18.267529 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874c88aa-a889-442c-92ea-1bdfe2a23761" containerName="mariadb-account-create-update" Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.267538 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="874c88aa-a889-442c-92ea-1bdfe2a23761" containerName="mariadb-account-create-update" Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.267746 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="874c88aa-a889-442c-92ea-1bdfe2a23761" 
containerName="mariadb-account-create-update" Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.267769 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9e9f514-b21b-430c-a352-124dbb196d6d" containerName="mariadb-account-create-update" Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.267790 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2084db85-248b-4371-86f6-4ff38216e099" containerName="mariadb-database-create" Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.268923 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xtf2r" Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.274211 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.305094 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xtf2r"] Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.359598 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdfkn\" (UniqueName: \"kubernetes.io/projected/32cffcd2-4390-4ed4-b9ce-8e7fa312d13e-kube-api-access-qdfkn\") pod \"root-account-create-update-xtf2r\" (UID: \"32cffcd2-4390-4ed4-b9ce-8e7fa312d13e\") " pod="openstack/root-account-create-update-xtf2r" Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.359679 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32cffcd2-4390-4ed4-b9ce-8e7fa312d13e-operator-scripts\") pod \"root-account-create-update-xtf2r\" (UID: \"32cffcd2-4390-4ed4-b9ce-8e7fa312d13e\") " pod="openstack/root-account-create-update-xtf2r" Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.460712 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-qdfkn\" (UniqueName: \"kubernetes.io/projected/32cffcd2-4390-4ed4-b9ce-8e7fa312d13e-kube-api-access-qdfkn\") pod \"root-account-create-update-xtf2r\" (UID: \"32cffcd2-4390-4ed4-b9ce-8e7fa312d13e\") " pod="openstack/root-account-create-update-xtf2r" Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.460800 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32cffcd2-4390-4ed4-b9ce-8e7fa312d13e-operator-scripts\") pod \"root-account-create-update-xtf2r\" (UID: \"32cffcd2-4390-4ed4-b9ce-8e7fa312d13e\") " pod="openstack/root-account-create-update-xtf2r" Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.461520 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32cffcd2-4390-4ed4-b9ce-8e7fa312d13e-operator-scripts\") pod \"root-account-create-update-xtf2r\" (UID: \"32cffcd2-4390-4ed4-b9ce-8e7fa312d13e\") " pod="openstack/root-account-create-update-xtf2r" Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.483845 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdfkn\" (UniqueName: \"kubernetes.io/projected/32cffcd2-4390-4ed4-b9ce-8e7fa312d13e-kube-api-access-qdfkn\") pod \"root-account-create-update-xtf2r\" (UID: \"32cffcd2-4390-4ed4-b9ce-8e7fa312d13e\") " pod="openstack/root-account-create-update-xtf2r" Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.595881 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xtf2r" Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.617561 4705 generic.go:334] "Generic (PLEG): container finished" podID="5062aa1f-60d7-499c-9751-eb326a788033" containerID="21c402e96d6f5100b9d088fc00ee183be39dea415265269ddd4b52e3b4e69f7d" exitCode=0 Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.617627 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-dfc3-account-create-update-z59kd" event={"ID":"5062aa1f-60d7-499c-9751-eb326a788033","Type":"ContainerDied","Data":"21c402e96d6f5100b9d088fc00ee183be39dea415265269ddd4b52e3b4e69f7d"} Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.617651 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-dfc3-account-create-update-z59kd" event={"ID":"5062aa1f-60d7-499c-9751-eb326a788033","Type":"ContainerStarted","Data":"ae33fc6bf110763c0cc235d29d933a3bd512420c156d0e0dbf07392be77ce24a"} Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.620377 4705 generic.go:334] "Generic (PLEG): container finished" podID="474fa99b-e87b-40e0-9de8-7bffc3b57abe" containerID="09d6783745d018637f9490bb61448483934129470d1ddb96b103992836e47078" exitCode=0 Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.620423 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-j6cfn" event={"ID":"474fa99b-e87b-40e0-9de8-7bffc3b57abe","Type":"ContainerDied","Data":"09d6783745d018637f9490bb61448483934129470d1ddb96b103992836e47078"} Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.620463 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-j6cfn" event={"ID":"474fa99b-e87b-40e0-9de8-7bffc3b57abe","Type":"ContainerStarted","Data":"29a3e81bf7b098c8b8b1eb533877f5aa1cabd0b670ee5d296b5fc300944b311b"} Jan 24 08:00:18 crc kubenswrapper[4705]: I0124 08:00:18.931975 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.033704 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-g6gpt"] Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.035380 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-g6gpt" podUID="469c2416-21a1-4157-9896-3d54ff2c9d02" containerName="dnsmasq-dns" containerID="cri-o://856ac80664fe1f3e00b7c34ac7ae6b1502ec6b76e6b64d782a4769ac3e9beb02" gracePeriod=10 Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.202090 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xtf2r"] Jan 24 08:00:19 crc kubenswrapper[4705]: W0124 08:00:19.212630 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32cffcd2_4390_4ed4_b9ce_8e7fa312d13e.slice/crio-172322680c7d35dc9023e82b9e724cba9c8b330535d0ddc456093032ee5697bc WatchSource:0}: Error finding container 172322680c7d35dc9023e82b9e724cba9c8b330535d0ddc456093032ee5697bc: Status 404 returned error can't find the container with id 172322680c7d35dc9023e82b9e724cba9c8b330535d0ddc456093032ee5697bc Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.551072 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.585969 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f340c15-be68-45e8-b217-7770f329ea7e" path="/var/lib/kubelet/pods/1f340c15-be68-45e8-b217-7770f329ea7e/volumes" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.597497 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-ovsdbserver-sb\") pod \"469c2416-21a1-4157-9896-3d54ff2c9d02\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.597605 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-config\") pod \"469c2416-21a1-4157-9896-3d54ff2c9d02\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.597702 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-ovsdbserver-nb\") pod \"469c2416-21a1-4157-9896-3d54ff2c9d02\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.597736 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7fdd\" (UniqueName: \"kubernetes.io/projected/469c2416-21a1-4157-9896-3d54ff2c9d02-kube-api-access-x7fdd\") pod \"469c2416-21a1-4157-9896-3d54ff2c9d02\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.597798 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-dns-svc\") pod 
\"469c2416-21a1-4157-9896-3d54ff2c9d02\" (UID: \"469c2416-21a1-4157-9896-3d54ff2c9d02\") " Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.607323 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/469c2416-21a1-4157-9896-3d54ff2c9d02-kube-api-access-x7fdd" (OuterVolumeSpecName: "kube-api-access-x7fdd") pod "469c2416-21a1-4157-9896-3d54ff2c9d02" (UID: "469c2416-21a1-4157-9896-3d54ff2c9d02"). InnerVolumeSpecName "kube-api-access-x7fdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.675501 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xtf2r" event={"ID":"32cffcd2-4390-4ed4-b9ce-8e7fa312d13e","Type":"ContainerStarted","Data":"c2d84db0817656c74058aacf22582805fb1818b7a4e2557e3795cf218c6f760c"} Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.675582 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xtf2r" event={"ID":"32cffcd2-4390-4ed4-b9ce-8e7fa312d13e","Type":"ContainerStarted","Data":"172322680c7d35dc9023e82b9e724cba9c8b330535d0ddc456093032ee5697bc"} Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.677696 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-config" (OuterVolumeSpecName: "config") pod "469c2416-21a1-4157-9896-3d54ff2c9d02" (UID: "469c2416-21a1-4157-9896-3d54ff2c9d02"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.678952 4705 generic.go:334] "Generic (PLEG): container finished" podID="14a437d6-0b75-49b5-a509-e9dd8beefa45" containerID="3b1c3bad40fd8d7a85b987d091c4cf75d636a9ba03e19f88b0e3a1f4db4f1716" exitCode=0 Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.679017 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"14a437d6-0b75-49b5-a509-e9dd8beefa45","Type":"ContainerDied","Data":"3b1c3bad40fd8d7a85b987d091c4cf75d636a9ba03e19f88b0e3a1f4db4f1716"} Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.681563 4705 generic.go:334] "Generic (PLEG): container finished" podID="469c2416-21a1-4157-9896-3d54ff2c9d02" containerID="856ac80664fe1f3e00b7c34ac7ae6b1502ec6b76e6b64d782a4769ac3e9beb02" exitCode=0 Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.681959 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-g6gpt" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.682627 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-g6gpt" event={"ID":"469c2416-21a1-4157-9896-3d54ff2c9d02","Type":"ContainerDied","Data":"856ac80664fe1f3e00b7c34ac7ae6b1502ec6b76e6b64d782a4769ac3e9beb02"} Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.682669 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-g6gpt" event={"ID":"469c2416-21a1-4157-9896-3d54ff2c9d02","Type":"ContainerDied","Data":"4eaf383e99df3aa322828810250c6a88413292ee7942444ee085b2905ab233ec"} Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.682705 4705 scope.go:117] "RemoveContainer" containerID="856ac80664fe1f3e00b7c34ac7ae6b1502ec6b76e6b64d782a4769ac3e9beb02" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.683968 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "469c2416-21a1-4157-9896-3d54ff2c9d02" (UID: "469c2416-21a1-4157-9896-3d54ff2c9d02"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.694920 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "469c2416-21a1-4157-9896-3d54ff2c9d02" (UID: "469c2416-21a1-4157-9896-3d54ff2c9d02"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.700657 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.700686 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.700701 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7fdd\" (UniqueName: \"kubernetes.io/projected/469c2416-21a1-4157-9896-3d54ff2c9d02-kube-api-access-x7fdd\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.700711 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.705761 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-xtf2r" 
podStartSLOduration=1.705743632 podStartE2EDuration="1.705743632s" podCreationTimestamp="2026-01-24 08:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:00:19.691452319 +0000 UTC m=+1158.411325617" watchObservedRunningTime="2026-01-24 08:00:19.705743632 +0000 UTC m=+1158.425616920" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.724990 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "469c2416-21a1-4157-9896-3d54ff2c9d02" (UID: "469c2416-21a1-4157-9896-3d54ff2c9d02"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.731586 4705 scope.go:117] "RemoveContainer" containerID="8ae02947e3c378c0de06d554727b6fc1de86dd02e6fc86283002411a84d48fd5" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.802318 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/469c2416-21a1-4157-9896-3d54ff2c9d02-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.828391 4705 scope.go:117] "RemoveContainer" containerID="856ac80664fe1f3e00b7c34ac7ae6b1502ec6b76e6b64d782a4769ac3e9beb02" Jan 24 08:00:19 crc kubenswrapper[4705]: E0124 08:00:19.828895 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"856ac80664fe1f3e00b7c34ac7ae6b1502ec6b76e6b64d782a4769ac3e9beb02\": container with ID starting with 856ac80664fe1f3e00b7c34ac7ae6b1502ec6b76e6b64d782a4769ac3e9beb02 not found: ID does not exist" containerID="856ac80664fe1f3e00b7c34ac7ae6b1502ec6b76e6b64d782a4769ac3e9beb02" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.828931 4705 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"856ac80664fe1f3e00b7c34ac7ae6b1502ec6b76e6b64d782a4769ac3e9beb02"} err="failed to get container status \"856ac80664fe1f3e00b7c34ac7ae6b1502ec6b76e6b64d782a4769ac3e9beb02\": rpc error: code = NotFound desc = could not find container \"856ac80664fe1f3e00b7c34ac7ae6b1502ec6b76e6b64d782a4769ac3e9beb02\": container with ID starting with 856ac80664fe1f3e00b7c34ac7ae6b1502ec6b76e6b64d782a4769ac3e9beb02 not found: ID does not exist" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.828955 4705 scope.go:117] "RemoveContainer" containerID="8ae02947e3c378c0de06d554727b6fc1de86dd02e6fc86283002411a84d48fd5" Jan 24 08:00:19 crc kubenswrapper[4705]: E0124 08:00:19.830236 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ae02947e3c378c0de06d554727b6fc1de86dd02e6fc86283002411a84d48fd5\": container with ID starting with 8ae02947e3c378c0de06d554727b6fc1de86dd02e6fc86283002411a84d48fd5 not found: ID does not exist" containerID="8ae02947e3c378c0de06d554727b6fc1de86dd02e6fc86283002411a84d48fd5" Jan 24 08:00:19 crc kubenswrapper[4705]: I0124 08:00:19.830258 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ae02947e3c378c0de06d554727b6fc1de86dd02e6fc86283002411a84d48fd5"} err="failed to get container status \"8ae02947e3c378c0de06d554727b6fc1de86dd02e6fc86283002411a84d48fd5\": rpc error: code = NotFound desc = could not find container \"8ae02947e3c378c0de06d554727b6fc1de86dd02e6fc86283002411a84d48fd5\": container with ID starting with 8ae02947e3c378c0de06d554727b6fc1de86dd02e6fc86283002411a84d48fd5 not found: ID does not exist" Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.029073 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-g6gpt"] Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.035687 4705 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-g6gpt"] Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.198285 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-dfc3-account-create-update-z59kd" Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.204003 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-j6cfn" Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.313301 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgqn4\" (UniqueName: \"kubernetes.io/projected/474fa99b-e87b-40e0-9de8-7bffc3b57abe-kube-api-access-sgqn4\") pod \"474fa99b-e87b-40e0-9de8-7bffc3b57abe\" (UID: \"474fa99b-e87b-40e0-9de8-7bffc3b57abe\") " Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.313414 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/474fa99b-e87b-40e0-9de8-7bffc3b57abe-operator-scripts\") pod \"474fa99b-e87b-40e0-9de8-7bffc3b57abe\" (UID: \"474fa99b-e87b-40e0-9de8-7bffc3b57abe\") " Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.313470 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5062aa1f-60d7-499c-9751-eb326a788033-operator-scripts\") pod \"5062aa1f-60d7-499c-9751-eb326a788033\" (UID: \"5062aa1f-60d7-499c-9751-eb326a788033\") " Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.313548 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg7g5\" (UniqueName: \"kubernetes.io/projected/5062aa1f-60d7-499c-9751-eb326a788033-kube-api-access-dg7g5\") pod \"5062aa1f-60d7-499c-9751-eb326a788033\" (UID: \"5062aa1f-60d7-499c-9751-eb326a788033\") " Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.314225 
4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/474fa99b-e87b-40e0-9de8-7bffc3b57abe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "474fa99b-e87b-40e0-9de8-7bffc3b57abe" (UID: "474fa99b-e87b-40e0-9de8-7bffc3b57abe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.314365 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5062aa1f-60d7-499c-9751-eb326a788033-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5062aa1f-60d7-499c-9751-eb326a788033" (UID: "5062aa1f-60d7-499c-9751-eb326a788033"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.318670 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/474fa99b-e87b-40e0-9de8-7bffc3b57abe-kube-api-access-sgqn4" (OuterVolumeSpecName: "kube-api-access-sgqn4") pod "474fa99b-e87b-40e0-9de8-7bffc3b57abe" (UID: "474fa99b-e87b-40e0-9de8-7bffc3b57abe"). InnerVolumeSpecName "kube-api-access-sgqn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.318772 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5062aa1f-60d7-499c-9751-eb326a788033-kube-api-access-dg7g5" (OuterVolumeSpecName: "kube-api-access-dg7g5") pod "5062aa1f-60d7-499c-9751-eb326a788033" (UID: "5062aa1f-60d7-499c-9751-eb326a788033"). InnerVolumeSpecName "kube-api-access-dg7g5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.415385 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/474fa99b-e87b-40e0-9de8-7bffc3b57abe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.415441 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5062aa1f-60d7-499c-9751-eb326a788033-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.415458 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg7g5\" (UniqueName: \"kubernetes.io/projected/5062aa1f-60d7-499c-9751-eb326a788033-kube-api-access-dg7g5\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.415471 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgqn4\" (UniqueName: \"kubernetes.io/projected/474fa99b-e87b-40e0-9de8-7bffc3b57abe-kube-api-access-sgqn4\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.691647 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-dfc3-account-create-update-z59kd" event={"ID":"5062aa1f-60d7-499c-9751-eb326a788033","Type":"ContainerDied","Data":"ae33fc6bf110763c0cc235d29d933a3bd512420c156d0e0dbf07392be77ce24a"} Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.692055 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae33fc6bf110763c0cc235d29d933a3bd512420c156d0e0dbf07392be77ce24a" Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.691653 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-dfc3-account-create-update-z59kd" Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.692773 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-j6cfn" Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.693836 4705 generic.go:334] "Generic (PLEG): container finished" podID="6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" containerID="241936c95010f43a17b312fb3af3043d1d415f93016b4bb8c8bd6c4b877be142" exitCode=0 Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.694950 4705 generic.go:334] "Generic (PLEG): container finished" podID="32cffcd2-4390-4ed4-b9ce-8e7fa312d13e" containerID="c2d84db0817656c74058aacf22582805fb1818b7a4e2557e3795cf218c6f760c" exitCode=0 Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.697215 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-j6cfn" event={"ID":"474fa99b-e87b-40e0-9de8-7bffc3b57abe","Type":"ContainerDied","Data":"29a3e81bf7b098c8b8b1eb533877f5aa1cabd0b670ee5d296b5fc300944b311b"} Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.697279 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29a3e81bf7b098c8b8b1eb533877f5aa1cabd0b670ee5d296b5fc300944b311b" Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.697302 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57","Type":"ContainerDied","Data":"241936c95010f43a17b312fb3af3043d1d415f93016b4bb8c8bd6c4b877be142"} Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.697413 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xtf2r" event={"ID":"32cffcd2-4390-4ed4-b9ce-8e7fa312d13e","Type":"ContainerDied","Data":"c2d84db0817656c74058aacf22582805fb1818b7a4e2557e3795cf218c6f760c"} Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.697430 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"14a437d6-0b75-49b5-a509-e9dd8beefa45","Type":"ContainerStarted","Data":"025cbcf5ea07007fc61cf1337223083ce6bad8d6533d249f649ca891701fdc71"} Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.697689 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:00:20 crc kubenswrapper[4705]: I0124 08:00:20.738505 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.932727797 podStartE2EDuration="1m23.738488332s" podCreationTimestamp="2026-01-24 07:58:57 +0000 UTC" firstStartedPulling="2026-01-24 07:59:00.885452269 +0000 UTC m=+1079.605325557" lastFinishedPulling="2026-01-24 07:59:42.691212814 +0000 UTC m=+1121.411086092" observedRunningTime="2026-01-24 08:00:20.736791864 +0000 UTC m=+1159.456665152" watchObservedRunningTime="2026-01-24 08:00:20.738488332 +0000 UTC m=+1159.458361620" Jan 24 08:00:21 crc kubenswrapper[4705]: I0124 08:00:21.338901 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:21 crc kubenswrapper[4705]: E0124 08:00:21.339408 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 08:00:21 crc kubenswrapper[4705]: E0124 08:00:21.339446 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 08:00:21 crc kubenswrapper[4705]: E0124 08:00:21.339510 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift 
podName:2521bbad-8785-4fbf-94fe-7309e9fe3442 nodeName:}" failed. No retries permitted until 2026-01-24 08:00:37.339488561 +0000 UTC m=+1176.059361849 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift") pod "swift-storage-0" (UID: "2521bbad-8785-4fbf-94fe-7309e9fe3442") : configmap "swift-ring-files" not found Jan 24 08:00:21 crc kubenswrapper[4705]: I0124 08:00:21.587273 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="469c2416-21a1-4157-9896-3d54ff2c9d02" path="/var/lib/kubelet/pods/469c2416-21a1-4157-9896-3d54ff2c9d02/volumes" Jan 24 08:00:21 crc kubenswrapper[4705]: I0124 08:00:21.705952 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57","Type":"ContainerStarted","Data":"ee262a212cdce8acb6049a406b677521701c9877efbea4bda061e8326ba3c980"} Jan 24 08:00:21 crc kubenswrapper[4705]: I0124 08:00:21.706679 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 24 08:00:21 crc kubenswrapper[4705]: I0124 08:00:21.741289 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371952.113504 podStartE2EDuration="1m24.741271508s" podCreationTimestamp="2026-01-24 07:58:57 +0000 UTC" firstStartedPulling="2026-01-24 07:58:59.800295962 +0000 UTC m=+1078.520169250" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:00:21.737982655 +0000 UTC m=+1160.457855943" watchObservedRunningTime="2026-01-24 08:00:21.741271508 +0000 UTC m=+1160.461144796" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.107890 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dqhqz" podUID="e50e3aa7-48d0-4559-9f09-f0a9a54232a7" containerName="ovn-controller" 
probeResult="failure" output=< Jan 24 08:00:22 crc kubenswrapper[4705]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 24 08:00:22 crc kubenswrapper[4705]: > Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.187852 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-llw8s" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.339018 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xtf2r" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.443662 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-zlxm7"] Jan 24 08:00:22 crc kubenswrapper[4705]: E0124 08:00:22.444036 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469c2416-21a1-4157-9896-3d54ff2c9d02" containerName="init" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.444065 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="469c2416-21a1-4157-9896-3d54ff2c9d02" containerName="init" Jan 24 08:00:22 crc kubenswrapper[4705]: E0124 08:00:22.444082 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5062aa1f-60d7-499c-9751-eb326a788033" containerName="mariadb-account-create-update" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.444090 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5062aa1f-60d7-499c-9751-eb326a788033" containerName="mariadb-account-create-update" Jan 24 08:00:22 crc kubenswrapper[4705]: E0124 08:00:22.444101 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32cffcd2-4390-4ed4-b9ce-8e7fa312d13e" containerName="mariadb-account-create-update" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.444109 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="32cffcd2-4390-4ed4-b9ce-8e7fa312d13e" containerName="mariadb-account-create-update" Jan 24 08:00:22 crc kubenswrapper[4705]: 
E0124 08:00:22.444129 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474fa99b-e87b-40e0-9de8-7bffc3b57abe" containerName="mariadb-database-create" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.444136 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="474fa99b-e87b-40e0-9de8-7bffc3b57abe" containerName="mariadb-database-create" Jan 24 08:00:22 crc kubenswrapper[4705]: E0124 08:00:22.444152 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469c2416-21a1-4157-9896-3d54ff2c9d02" containerName="dnsmasq-dns" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.444159 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="469c2416-21a1-4157-9896-3d54ff2c9d02" containerName="dnsmasq-dns" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.444442 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="474fa99b-e87b-40e0-9de8-7bffc3b57abe" containerName="mariadb-database-create" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.444464 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="469c2416-21a1-4157-9896-3d54ff2c9d02" containerName="dnsmasq-dns" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.444477 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5062aa1f-60d7-499c-9751-eb326a788033" containerName="mariadb-account-create-update" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.444493 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="32cffcd2-4390-4ed4-b9ce-8e7fa312d13e" containerName="mariadb-account-create-update" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.445120 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zlxm7" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.447217 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.447357 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ggqbv" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.466207 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zlxm7"] Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.478447 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32cffcd2-4390-4ed4-b9ce-8e7fa312d13e-operator-scripts\") pod \"32cffcd2-4390-4ed4-b9ce-8e7fa312d13e\" (UID: \"32cffcd2-4390-4ed4-b9ce-8e7fa312d13e\") " Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.478502 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdfkn\" (UniqueName: \"kubernetes.io/projected/32cffcd2-4390-4ed4-b9ce-8e7fa312d13e-kube-api-access-qdfkn\") pod \"32cffcd2-4390-4ed4-b9ce-8e7fa312d13e\" (UID: \"32cffcd2-4390-4ed4-b9ce-8e7fa312d13e\") " Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.481581 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32cffcd2-4390-4ed4-b9ce-8e7fa312d13e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32cffcd2-4390-4ed4-b9ce-8e7fa312d13e" (UID: "32cffcd2-4390-4ed4-b9ce-8e7fa312d13e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.494275 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dqhqz-config-nwwjj"] Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.495696 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.505770 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32cffcd2-4390-4ed4-b9ce-8e7fa312d13e-kube-api-access-qdfkn" (OuterVolumeSpecName: "kube-api-access-qdfkn") pod "32cffcd2-4390-4ed4-b9ce-8e7fa312d13e" (UID: "32cffcd2-4390-4ed4-b9ce-8e7fa312d13e"). InnerVolumeSpecName "kube-api-access-qdfkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.505810 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.514276 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dqhqz-config-nwwjj"] Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.579872 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-config-data\") pod \"glance-db-sync-zlxm7\" (UID: \"d40a2abc-33c9-4284-af71-03fc828b92d2\") " pod="openstack/glance-db-sync-zlxm7" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.579932 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-db-sync-config-data\") pod \"glance-db-sync-zlxm7\" (UID: \"d40a2abc-33c9-4284-af71-03fc828b92d2\") " pod="openstack/glance-db-sync-zlxm7" 
Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.579990 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg94x\" (UniqueName: \"kubernetes.io/projected/d40a2abc-33c9-4284-af71-03fc828b92d2-kube-api-access-tg94x\") pod \"glance-db-sync-zlxm7\" (UID: \"d40a2abc-33c9-4284-af71-03fc828b92d2\") " pod="openstack/glance-db-sync-zlxm7" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.580247 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-combined-ca-bundle\") pod \"glance-db-sync-zlxm7\" (UID: \"d40a2abc-33c9-4284-af71-03fc828b92d2\") " pod="openstack/glance-db-sync-zlxm7" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.580463 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32cffcd2-4390-4ed4-b9ce-8e7fa312d13e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.580494 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdfkn\" (UniqueName: \"kubernetes.io/projected/32cffcd2-4390-4ed4-b9ce-8e7fa312d13e-kube-api-access-qdfkn\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.682049 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/34604d31-8038-4567-ba32-2c830061841b-additional-scripts\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.682100 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg94x\" (UniqueName: 
\"kubernetes.io/projected/d40a2abc-33c9-4284-af71-03fc828b92d2-kube-api-access-tg94x\") pod \"glance-db-sync-zlxm7\" (UID: \"d40a2abc-33c9-4284-af71-03fc828b92d2\") " pod="openstack/glance-db-sync-zlxm7" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.682143 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-run\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.682190 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtjf5\" (UniqueName: \"kubernetes.io/projected/34604d31-8038-4567-ba32-2c830061841b-kube-api-access-dtjf5\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.682211 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-run-ovn\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.682248 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-combined-ca-bundle\") pod \"glance-db-sync-zlxm7\" (UID: \"d40a2abc-33c9-4284-af71-03fc828b92d2\") " pod="openstack/glance-db-sync-zlxm7" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.682277 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-log-ovn\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.682352 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34604d31-8038-4567-ba32-2c830061841b-scripts\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.682422 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-config-data\") pod \"glance-db-sync-zlxm7\" (UID: \"d40a2abc-33c9-4284-af71-03fc828b92d2\") " pod="openstack/glance-db-sync-zlxm7" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.682453 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-db-sync-config-data\") pod \"glance-db-sync-zlxm7\" (UID: \"d40a2abc-33c9-4284-af71-03fc828b92d2\") " pod="openstack/glance-db-sync-zlxm7" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.687502 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-combined-ca-bundle\") pod \"glance-db-sync-zlxm7\" (UID: \"d40a2abc-33c9-4284-af71-03fc828b92d2\") " pod="openstack/glance-db-sync-zlxm7" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.687949 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-db-sync-config-data\") pod \"glance-db-sync-zlxm7\" (UID: \"d40a2abc-33c9-4284-af71-03fc828b92d2\") " pod="openstack/glance-db-sync-zlxm7" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.703609 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-config-data\") pod \"glance-db-sync-zlxm7\" (UID: \"d40a2abc-33c9-4284-af71-03fc828b92d2\") " pod="openstack/glance-db-sync-zlxm7" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.705486 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg94x\" (UniqueName: \"kubernetes.io/projected/d40a2abc-33c9-4284-af71-03fc828b92d2-kube-api-access-tg94x\") pod \"glance-db-sync-zlxm7\" (UID: \"d40a2abc-33c9-4284-af71-03fc828b92d2\") " pod="openstack/glance-db-sync-zlxm7" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.713621 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xtf2r" event={"ID":"32cffcd2-4390-4ed4-b9ce-8e7fa312d13e","Type":"ContainerDied","Data":"172322680c7d35dc9023e82b9e724cba9c8b330535d0ddc456093032ee5697bc"} Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.713654 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="172322680c7d35dc9023e82b9e724cba9c8b330535d0ddc456093032ee5697bc" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.713674 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xtf2r" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.762574 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zlxm7" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.783610 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/34604d31-8038-4567-ba32-2c830061841b-additional-scripts\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.783669 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-run\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.783711 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtjf5\" (UniqueName: \"kubernetes.io/projected/34604d31-8038-4567-ba32-2c830061841b-kube-api-access-dtjf5\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.783731 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-run-ovn\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.783758 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-log-ovn\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: 
\"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.783814 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34604d31-8038-4567-ba32-2c830061841b-scripts\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.785677 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34604d31-8038-4567-ba32-2c830061841b-scripts\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.786179 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/34604d31-8038-4567-ba32-2c830061841b-additional-scripts\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.786429 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-run\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.786719 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-run-ovn\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " 
pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.786760 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-log-ovn\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.804729 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtjf5\" (UniqueName: \"kubernetes.io/projected/34604d31-8038-4567-ba32-2c830061841b-kube-api-access-dtjf5\") pod \"ovn-controller-dqhqz-config-nwwjj\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:22 crc kubenswrapper[4705]: I0124 08:00:22.839464 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:23 crc kubenswrapper[4705]: I0124 08:00:23.745749 4705 generic.go:334] "Generic (PLEG): container finished" podID="71ae95bb-0592-4ebd-b74a-c2ed2cc5654e" containerID="b68a86bb6be9a59483119d97082afbc5e87f3ee6868190f554fe8bc0d9a9970c" exitCode=0 Jan 24 08:00:23 crc kubenswrapper[4705]: I0124 08:00:23.745849 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wvgxh" event={"ID":"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e","Type":"ContainerDied","Data":"b68a86bb6be9a59483119d97082afbc5e87f3ee6868190f554fe8bc0d9a9970c"} Jan 24 08:00:23 crc kubenswrapper[4705]: I0124 08:00:23.827667 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dqhqz-config-nwwjj"] Jan 24 08:00:23 crc kubenswrapper[4705]: I0124 08:00:23.890740 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zlxm7"] Jan 24 08:00:23 crc kubenswrapper[4705]: W0124 08:00:23.917312 4705 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd40a2abc_33c9_4284_af71_03fc828b92d2.slice/crio-60f6babc4b4667f834bdc18cfd3fe12fa2d150fdc3802b5c989880e3d23e925f WatchSource:0}: Error finding container 60f6babc4b4667f834bdc18cfd3fe12fa2d150fdc3802b5c989880e3d23e925f: Status 404 returned error can't find the container with id 60f6babc4b4667f834bdc18cfd3fe12fa2d150fdc3802b5c989880e3d23e925f Jan 24 08:00:24 crc kubenswrapper[4705]: I0124 08:00:24.530375 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xtf2r"] Jan 24 08:00:24 crc kubenswrapper[4705]: I0124 08:00:24.536736 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xtf2r"] Jan 24 08:00:24 crc kubenswrapper[4705]: I0124 08:00:24.763685 4705 generic.go:334] "Generic (PLEG): container finished" podID="34604d31-8038-4567-ba32-2c830061841b" containerID="d97aa0b07a91a2345b7a139371b45c68f6552fc608b3b72c6226883c22cbb979" exitCode=0 Jan 24 08:00:24 crc kubenswrapper[4705]: I0124 08:00:24.763744 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dqhqz-config-nwwjj" event={"ID":"34604d31-8038-4567-ba32-2c830061841b","Type":"ContainerDied","Data":"d97aa0b07a91a2345b7a139371b45c68f6552fc608b3b72c6226883c22cbb979"} Jan 24 08:00:24 crc kubenswrapper[4705]: I0124 08:00:24.763810 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dqhqz-config-nwwjj" event={"ID":"34604d31-8038-4567-ba32-2c830061841b","Type":"ContainerStarted","Data":"99d720aa106c642db14d5edc730cdac277a5fb59e8e5e752fec506822e3f7d9c"} Jan 24 08:00:24 crc kubenswrapper[4705]: I0124 08:00:24.766205 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zlxm7" 
event={"ID":"d40a2abc-33c9-4284-af71-03fc828b92d2","Type":"ContainerStarted","Data":"60f6babc4b4667f834bdc18cfd3fe12fa2d150fdc3802b5c989880e3d23e925f"} Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.119756 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wvgxh" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.254320 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-combined-ca-bundle\") pod \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.254461 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-ring-data-devices\") pod \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.254491 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-dispersionconf\") pod \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.254522 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-scripts\") pod \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.254559 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-etc-swift\") 
pod \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.254627 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-swiftconf\") pod \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.254679 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhcbf\" (UniqueName: \"kubernetes.io/projected/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-kube-api-access-xhcbf\") pod \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\" (UID: \"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e\") " Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.255363 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "71ae95bb-0592-4ebd-b74a-c2ed2cc5654e" (UID: "71ae95bb-0592-4ebd-b74a-c2ed2cc5654e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.256259 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "71ae95bb-0592-4ebd-b74a-c2ed2cc5654e" (UID: "71ae95bb-0592-4ebd-b74a-c2ed2cc5654e"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.265698 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "71ae95bb-0592-4ebd-b74a-c2ed2cc5654e" (UID: "71ae95bb-0592-4ebd-b74a-c2ed2cc5654e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.281738 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-scripts" (OuterVolumeSpecName: "scripts") pod "71ae95bb-0592-4ebd-b74a-c2ed2cc5654e" (UID: "71ae95bb-0592-4ebd-b74a-c2ed2cc5654e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.283130 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-kube-api-access-xhcbf" (OuterVolumeSpecName: "kube-api-access-xhcbf") pod "71ae95bb-0592-4ebd-b74a-c2ed2cc5654e" (UID: "71ae95bb-0592-4ebd-b74a-c2ed2cc5654e"). InnerVolumeSpecName "kube-api-access-xhcbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.286070 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71ae95bb-0592-4ebd-b74a-c2ed2cc5654e" (UID: "71ae95bb-0592-4ebd-b74a-c2ed2cc5654e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.322434 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "71ae95bb-0592-4ebd-b74a-c2ed2cc5654e" (UID: "71ae95bb-0592-4ebd-b74a-c2ed2cc5654e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.357022 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.357278 4705 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.357287 4705 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.357298 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.357306 4705 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.357313 4705 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.357322 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhcbf\" (UniqueName: \"kubernetes.io/projected/71ae95bb-0592-4ebd-b74a-c2ed2cc5654e-kube-api-access-xhcbf\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.589627 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32cffcd2-4390-4ed4-b9ce-8e7fa312d13e" path="/var/lib/kubelet/pods/32cffcd2-4390-4ed4-b9ce-8e7fa312d13e/volumes" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.860663 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wvgxh" Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.860719 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wvgxh" event={"ID":"71ae95bb-0592-4ebd-b74a-c2ed2cc5654e","Type":"ContainerDied","Data":"94a0edcc855fc4c73c482318af2f65219dac1105d25572fc1300e0d1135fe332"} Jan 24 08:00:25 crc kubenswrapper[4705]: I0124 08:00:25.860744 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94a0edcc855fc4c73c482318af2f65219dac1105d25572fc1300e0d1135fe332" Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.545368 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.630879 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-log-ovn\") pod \"34604d31-8038-4567-ba32-2c830061841b\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.630985 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtjf5\" (UniqueName: \"kubernetes.io/projected/34604d31-8038-4567-ba32-2c830061841b-kube-api-access-dtjf5\") pod \"34604d31-8038-4567-ba32-2c830061841b\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.631016 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-run-ovn\") pod \"34604d31-8038-4567-ba32-2c830061841b\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.631042 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/34604d31-8038-4567-ba32-2c830061841b-additional-scripts\") pod \"34604d31-8038-4567-ba32-2c830061841b\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.631110 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-run\") pod \"34604d31-8038-4567-ba32-2c830061841b\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.631131 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/34604d31-8038-4567-ba32-2c830061841b-scripts\") pod \"34604d31-8038-4567-ba32-2c830061841b\" (UID: \"34604d31-8038-4567-ba32-2c830061841b\") " Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.631537 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "34604d31-8038-4567-ba32-2c830061841b" (UID: "34604d31-8038-4567-ba32-2c830061841b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.631666 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-run" (OuterVolumeSpecName: "var-run") pod "34604d31-8038-4567-ba32-2c830061841b" (UID: "34604d31-8038-4567-ba32-2c830061841b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.631723 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "34604d31-8038-4567-ba32-2c830061841b" (UID: "34604d31-8038-4567-ba32-2c830061841b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.632170 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34604d31-8038-4567-ba32-2c830061841b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "34604d31-8038-4567-ba32-2c830061841b" (UID: "34604d31-8038-4567-ba32-2c830061841b"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.632811 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34604d31-8038-4567-ba32-2c830061841b-scripts" (OuterVolumeSpecName: "scripts") pod "34604d31-8038-4567-ba32-2c830061841b" (UID: "34604d31-8038-4567-ba32-2c830061841b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.635191 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34604d31-8038-4567-ba32-2c830061841b-kube-api-access-dtjf5" (OuterVolumeSpecName: "kube-api-access-dtjf5") pod "34604d31-8038-4567-ba32-2c830061841b" (UID: "34604d31-8038-4567-ba32-2c830061841b"). InnerVolumeSpecName "kube-api-access-dtjf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.734128 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtjf5\" (UniqueName: \"kubernetes.io/projected/34604d31-8038-4567-ba32-2c830061841b-kube-api-access-dtjf5\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.734179 4705 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.734191 4705 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/34604d31-8038-4567-ba32-2c830061841b-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.734200 4705 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-run\") on node \"crc\" 
DevicePath \"\"" Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.734224 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34604d31-8038-4567-ba32-2c830061841b-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.734232 4705 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/34604d31-8038-4567-ba32-2c830061841b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.869199 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dqhqz-config-nwwjj" event={"ID":"34604d31-8038-4567-ba32-2c830061841b","Type":"ContainerDied","Data":"99d720aa106c642db14d5edc730cdac277a5fb59e8e5e752fec506822e3f7d9c"} Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.869243 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d720aa106c642db14d5edc730cdac277a5fb59e8e5e752fec506822e3f7d9c" Jan 24 08:00:26 crc kubenswrapper[4705]: I0124 08:00:26.869286 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dqhqz-config-nwwjj" Jan 24 08:00:27 crc kubenswrapper[4705]: I0124 08:00:27.106072 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-dqhqz" Jan 24 08:00:27 crc kubenswrapper[4705]: I0124 08:00:27.646617 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-dqhqz-config-nwwjj"] Jan 24 08:00:27 crc kubenswrapper[4705]: I0124 08:00:27.654185 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-dqhqz-config-nwwjj"] Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 08:00:29.535461 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qrbfv"] Jan 24 08:00:29 crc kubenswrapper[4705]: E0124 08:00:29.535913 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ae95bb-0592-4ebd-b74a-c2ed2cc5654e" containerName="swift-ring-rebalance" Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 08:00:29.535931 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ae95bb-0592-4ebd-b74a-c2ed2cc5654e" containerName="swift-ring-rebalance" Jan 24 08:00:29 crc kubenswrapper[4705]: E0124 08:00:29.535969 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34604d31-8038-4567-ba32-2c830061841b" containerName="ovn-config" Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 08:00:29.535978 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="34604d31-8038-4567-ba32-2c830061841b" containerName="ovn-config" Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 08:00:29.536199 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ae95bb-0592-4ebd-b74a-c2ed2cc5654e" containerName="swift-ring-rebalance" Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 08:00:29.536230 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="34604d31-8038-4567-ba32-2c830061841b" containerName="ovn-config" Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 
08:00:29.536896 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qrbfv" Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 08:00:29.539401 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 08:00:29.547406 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qrbfv"] Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 08:00:29.556997 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 08:00:29.594456 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34604d31-8038-4567-ba32-2c830061841b" path="/var/lib/kubelet/pods/34604d31-8038-4567-ba32-2c830061841b/volumes" Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 08:00:29.674671 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlgmx\" (UniqueName: \"kubernetes.io/projected/8e7afb13-2fb5-4520-acfc-d52cb558cd6c-kube-api-access-zlgmx\") pod \"root-account-create-update-qrbfv\" (UID: \"8e7afb13-2fb5-4520-acfc-d52cb558cd6c\") " pod="openstack/root-account-create-update-qrbfv" Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 08:00:29.674723 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e7afb13-2fb5-4520-acfc-d52cb558cd6c-operator-scripts\") pod \"root-account-create-update-qrbfv\" (UID: \"8e7afb13-2fb5-4520-acfc-d52cb558cd6c\") " pod="openstack/root-account-create-update-qrbfv" Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 08:00:29.777178 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlgmx\" (UniqueName: 
\"kubernetes.io/projected/8e7afb13-2fb5-4520-acfc-d52cb558cd6c-kube-api-access-zlgmx\") pod \"root-account-create-update-qrbfv\" (UID: \"8e7afb13-2fb5-4520-acfc-d52cb558cd6c\") " pod="openstack/root-account-create-update-qrbfv" Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 08:00:29.777232 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e7afb13-2fb5-4520-acfc-d52cb558cd6c-operator-scripts\") pod \"root-account-create-update-qrbfv\" (UID: \"8e7afb13-2fb5-4520-acfc-d52cb558cd6c\") " pod="openstack/root-account-create-update-qrbfv" Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 08:00:29.778114 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e7afb13-2fb5-4520-acfc-d52cb558cd6c-operator-scripts\") pod \"root-account-create-update-qrbfv\" (UID: \"8e7afb13-2fb5-4520-acfc-d52cb558cd6c\") " pod="openstack/root-account-create-update-qrbfv" Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 08:00:29.803442 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlgmx\" (UniqueName: \"kubernetes.io/projected/8e7afb13-2fb5-4520-acfc-d52cb558cd6c-kube-api-access-zlgmx\") pod \"root-account-create-update-qrbfv\" (UID: \"8e7afb13-2fb5-4520-acfc-d52cb558cd6c\") " pod="openstack/root-account-create-update-qrbfv" Jan 24 08:00:29 crc kubenswrapper[4705]: I0124 08:00:29.860537 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qrbfv" Jan 24 08:00:30 crc kubenswrapper[4705]: I0124 08:00:30.481318 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qrbfv"] Jan 24 08:00:30 crc kubenswrapper[4705]: I0124 08:00:30.915002 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qrbfv" event={"ID":"8e7afb13-2fb5-4520-acfc-d52cb558cd6c","Type":"ContainerStarted","Data":"d753f39bce964afd48a7c776f2d15017fa49313a7a38610bf981e063c3defd93"} Jan 24 08:00:31 crc kubenswrapper[4705]: I0124 08:00:31.924040 4705 generic.go:334] "Generic (PLEG): container finished" podID="8e7afb13-2fb5-4520-acfc-d52cb558cd6c" containerID="25d18c478c1e9b79b6b4106c2375879eeb69eea8cb581bb3a51f6627d776fa8d" exitCode=0 Jan 24 08:00:31 crc kubenswrapper[4705]: I0124 08:00:31.924195 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qrbfv" event={"ID":"8e7afb13-2fb5-4520-acfc-d52cb558cd6c","Type":"ContainerDied","Data":"25d18c478c1e9b79b6b4106c2375879eeb69eea8cb581bb3a51f6627d776fa8d"} Jan 24 08:00:37 crc kubenswrapper[4705]: I0124 08:00:37.071110 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:00:37 crc kubenswrapper[4705]: I0124 08:00:37.072397 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:00:37 crc kubenswrapper[4705]: I0124 08:00:37.357065 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:37 crc kubenswrapper[4705]: I0124 08:00:37.364969 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2521bbad-8785-4fbf-94fe-7309e9fe3442-etc-swift\") pod \"swift-storage-0\" (UID: \"2521bbad-8785-4fbf-94fe-7309e9fe3442\") " pod="openstack/swift-storage-0" Jan 24 08:00:37 crc kubenswrapper[4705]: I0124 08:00:37.499172 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 24 08:00:38 crc kubenswrapper[4705]: I0124 08:00:38.913022 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.368481 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0742-account-create-update-sh6tr"] Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.369867 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0742-account-create-update-sh6tr" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.371737 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.452391 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0742-account-create-update-sh6tr"] Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.467614 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4skzp"] Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.478991 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0931e94d-dcf4-447e-bbda-071ae0b176ec-operator-scripts\") pod \"barbican-0742-account-create-update-sh6tr\" (UID: \"0931e94d-dcf4-447e-bbda-071ae0b176ec\") " pod="openstack/barbican-0742-account-create-update-sh6tr" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.479176 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtwr8\" (UniqueName: \"kubernetes.io/projected/0931e94d-dcf4-447e-bbda-071ae0b176ec-kube-api-access-rtwr8\") pod \"barbican-0742-account-create-update-sh6tr\" (UID: \"0931e94d-dcf4-447e-bbda-071ae0b176ec\") " pod="openstack/barbican-0742-account-create-update-sh6tr" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.533168 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4skzp"] Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.533269 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4skzp" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.559154 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-xx2zn"] Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.560233 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xx2zn" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.565957 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xx2zn"] Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.581309 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtwr8\" (UniqueName: \"kubernetes.io/projected/0931e94d-dcf4-447e-bbda-071ae0b176ec-kube-api-access-rtwr8\") pod \"barbican-0742-account-create-update-sh6tr\" (UID: \"0931e94d-dcf4-447e-bbda-071ae0b176ec\") " pod="openstack/barbican-0742-account-create-update-sh6tr" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.581391 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0931e94d-dcf4-447e-bbda-071ae0b176ec-operator-scripts\") pod \"barbican-0742-account-create-update-sh6tr\" (UID: \"0931e94d-dcf4-447e-bbda-071ae0b176ec\") " pod="openstack/barbican-0742-account-create-update-sh6tr" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.594733 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0931e94d-dcf4-447e-bbda-071ae0b176ec-operator-scripts\") pod \"barbican-0742-account-create-update-sh6tr\" (UID: \"0931e94d-dcf4-447e-bbda-071ae0b176ec\") " pod="openstack/barbican-0742-account-create-update-sh6tr" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.801552 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lblwb\" (UniqueName: \"kubernetes.io/projected/28b7cacf-32a2-48d0-af65-162d7b360d89-kube-api-access-lblwb\") pod \"cinder-db-create-4skzp\" (UID: \"28b7cacf-32a2-48d0-af65-162d7b360d89\") " pod="openstack/cinder-db-create-4skzp" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.801617 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28b7cacf-32a2-48d0-af65-162d7b360d89-operator-scripts\") pod \"cinder-db-create-4skzp\" (UID: \"28b7cacf-32a2-48d0-af65-162d7b360d89\") " pod="openstack/cinder-db-create-4skzp" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.801681 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87j7j\" (UniqueName: \"kubernetes.io/projected/05c9a82f-9177-4a55-8059-6a498bbf927d-kube-api-access-87j7j\") pod \"barbican-db-create-xx2zn\" (UID: \"05c9a82f-9177-4a55-8059-6a498bbf927d\") " pod="openstack/barbican-db-create-xx2zn" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.801735 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05c9a82f-9177-4a55-8059-6a498bbf927d-operator-scripts\") pod \"barbican-db-create-xx2zn\" (UID: \"05c9a82f-9177-4a55-8059-6a498bbf927d\") " pod="openstack/barbican-db-create-xx2zn" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.830646 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtwr8\" (UniqueName: \"kubernetes.io/projected/0931e94d-dcf4-447e-bbda-071ae0b176ec-kube-api-access-rtwr8\") pod \"barbican-0742-account-create-update-sh6tr\" (UID: \"0931e94d-dcf4-447e-bbda-071ae0b176ec\") " pod="openstack/barbican-0742-account-create-update-sh6tr" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.845732 4705 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-b6a9-account-create-update-vlfsr"] Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.872109 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b6a9-account-create-update-vlfsr" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.881227 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.903057 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b6a9-account-create-update-vlfsr"] Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.932918 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87j7j\" (UniqueName: \"kubernetes.io/projected/05c9a82f-9177-4a55-8059-6a498bbf927d-kube-api-access-87j7j\") pod \"barbican-db-create-xx2zn\" (UID: \"05c9a82f-9177-4a55-8059-6a498bbf927d\") " pod="openstack/barbican-db-create-xx2zn" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.932980 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05c9a82f-9177-4a55-8059-6a498bbf927d-operator-scripts\") pod \"barbican-db-create-xx2zn\" (UID: \"05c9a82f-9177-4a55-8059-6a498bbf927d\") " pod="openstack/barbican-db-create-xx2zn" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.933016 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5d096e4-c16f-4958-a1ea-d38cdecf18da-operator-scripts\") pod \"cinder-b6a9-account-create-update-vlfsr\" (UID: \"f5d096e4-c16f-4958-a1ea-d38cdecf18da\") " pod="openstack/cinder-b6a9-account-create-update-vlfsr" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.933058 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vx9x\" 
(UniqueName: \"kubernetes.io/projected/f5d096e4-c16f-4958-a1ea-d38cdecf18da-kube-api-access-5vx9x\") pod \"cinder-b6a9-account-create-update-vlfsr\" (UID: \"f5d096e4-c16f-4958-a1ea-d38cdecf18da\") " pod="openstack/cinder-b6a9-account-create-update-vlfsr" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.933148 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lblwb\" (UniqueName: \"kubernetes.io/projected/28b7cacf-32a2-48d0-af65-162d7b360d89-kube-api-access-lblwb\") pod \"cinder-db-create-4skzp\" (UID: \"28b7cacf-32a2-48d0-af65-162d7b360d89\") " pod="openstack/cinder-db-create-4skzp" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.933172 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28b7cacf-32a2-48d0-af65-162d7b360d89-operator-scripts\") pod \"cinder-db-create-4skzp\" (UID: \"28b7cacf-32a2-48d0-af65-162d7b360d89\") " pod="openstack/cinder-db-create-4skzp" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.933808 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28b7cacf-32a2-48d0-af65-162d7b360d89-operator-scripts\") pod \"cinder-db-create-4skzp\" (UID: \"28b7cacf-32a2-48d0-af65-162d7b360d89\") " pod="openstack/cinder-db-create-4skzp" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.934631 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05c9a82f-9177-4a55-8059-6a498bbf927d-operator-scripts\") pod \"barbican-db-create-xx2zn\" (UID: \"05c9a82f-9177-4a55-8059-6a498bbf927d\") " pod="openstack/barbican-db-create-xx2zn" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.982740 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lblwb\" (UniqueName: 
\"kubernetes.io/projected/28b7cacf-32a2-48d0-af65-162d7b360d89-kube-api-access-lblwb\") pod \"cinder-db-create-4skzp\" (UID: \"28b7cacf-32a2-48d0-af65-162d7b360d89\") " pod="openstack/cinder-db-create-4skzp" Jan 24 08:00:39 crc kubenswrapper[4705]: I0124 08:00:39.986612 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87j7j\" (UniqueName: \"kubernetes.io/projected/05c9a82f-9177-4a55-8059-6a498bbf927d-kube-api-access-87j7j\") pod \"barbican-db-create-xx2zn\" (UID: \"05c9a82f-9177-4a55-8059-6a498bbf927d\") " pod="openstack/barbican-db-create-xx2zn" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.022475 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-fsqnk"] Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.023710 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-fsqnk" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.035273 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5d096e4-c16f-4958-a1ea-d38cdecf18da-operator-scripts\") pod \"cinder-b6a9-account-create-update-vlfsr\" (UID: \"f5d096e4-c16f-4958-a1ea-d38cdecf18da\") " pod="openstack/cinder-b6a9-account-create-update-vlfsr" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.035333 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vx9x\" (UniqueName: \"kubernetes.io/projected/f5d096e4-c16f-4958-a1ea-d38cdecf18da-kube-api-access-5vx9x\") pod \"cinder-b6a9-account-create-update-vlfsr\" (UID: \"f5d096e4-c16f-4958-a1ea-d38cdecf18da\") " pod="openstack/cinder-b6a9-account-create-update-vlfsr" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.037617 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f5d096e4-c16f-4958-a1ea-d38cdecf18da-operator-scripts\") pod \"cinder-b6a9-account-create-update-vlfsr\" (UID: \"f5d096e4-c16f-4958-a1ea-d38cdecf18da\") " pod="openstack/cinder-b6a9-account-create-update-vlfsr" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.051516 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0742-account-create-update-sh6tr" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.066201 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-fsqnk"] Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.109619 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vx9x\" (UniqueName: \"kubernetes.io/projected/f5d096e4-c16f-4958-a1ea-d38cdecf18da-kube-api-access-5vx9x\") pod \"cinder-b6a9-account-create-update-vlfsr\" (UID: \"f5d096e4-c16f-4958-a1ea-d38cdecf18da\") " pod="openstack/cinder-b6a9-account-create-update-vlfsr" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.137943 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nplvc\" (UniqueName: \"kubernetes.io/projected/f637d189-c592-4ff2-96a3-8b001688a84f-kube-api-access-nplvc\") pod \"heat-db-create-fsqnk\" (UID: \"f637d189-c592-4ff2-96a3-8b001688a84f\") " pod="openstack/heat-db-create-fsqnk" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.138129 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f637d189-c592-4ff2-96a3-8b001688a84f-operator-scripts\") pod \"heat-db-create-fsqnk\" (UID: \"f637d189-c592-4ff2-96a3-8b001688a84f\") " pod="openstack/heat-db-create-fsqnk" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.148979 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-5q8ws"] Jan 24 08:00:40 crc 
kubenswrapper[4705]: I0124 08:00:40.150193 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5q8ws" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.159530 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4skzp" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.160430 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.160776 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4gs7v" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.170262 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lmg2v"] Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.173559 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lmg2v" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.176226 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.176444 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.180478 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xx2zn" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.212125 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lmg2v"] Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.237381 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b6a9-account-create-update-vlfsr" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.237704 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5q8ws"] Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.242114 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6564fea8-f9ad-47ed-90fa-1d08616f0b60-operator-scripts\") pod \"neutron-db-create-lmg2v\" (UID: \"6564fea8-f9ad-47ed-90fa-1d08616f0b60\") " pod="openstack/neutron-db-create-lmg2v" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.242170 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nplvc\" (UniqueName: \"kubernetes.io/projected/f637d189-c592-4ff2-96a3-8b001688a84f-kube-api-access-nplvc\") pod \"heat-db-create-fsqnk\" (UID: \"f637d189-c592-4ff2-96a3-8b001688a84f\") " pod="openstack/heat-db-create-fsqnk" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.242213 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05482d0-25f0-4382-bfdb-ff3053e44366-combined-ca-bundle\") pod \"keystone-db-sync-5q8ws\" (UID: \"a05482d0-25f0-4382-bfdb-ff3053e44366\") " pod="openstack/keystone-db-sync-5q8ws" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.242263 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pswv\" (UniqueName: \"kubernetes.io/projected/6564fea8-f9ad-47ed-90fa-1d08616f0b60-kube-api-access-2pswv\") pod \"neutron-db-create-lmg2v\" (UID: \"6564fea8-f9ad-47ed-90fa-1d08616f0b60\") " pod="openstack/neutron-db-create-lmg2v" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.242310 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f637d189-c592-4ff2-96a3-8b001688a84f-operator-scripts\") pod \"heat-db-create-fsqnk\" (UID: \"f637d189-c592-4ff2-96a3-8b001688a84f\") " pod="openstack/heat-db-create-fsqnk" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.242332 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05482d0-25f0-4382-bfdb-ff3053e44366-config-data\") pod \"keystone-db-sync-5q8ws\" (UID: \"a05482d0-25f0-4382-bfdb-ff3053e44366\") " pod="openstack/keystone-db-sync-5q8ws" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.242351 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tmlb\" (UniqueName: \"kubernetes.io/projected/a05482d0-25f0-4382-bfdb-ff3053e44366-kube-api-access-8tmlb\") pod \"keystone-db-sync-5q8ws\" (UID: \"a05482d0-25f0-4382-bfdb-ff3053e44366\") " pod="openstack/keystone-db-sync-5q8ws" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.243379 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f637d189-c592-4ff2-96a3-8b001688a84f-operator-scripts\") pod \"heat-db-create-fsqnk\" (UID: \"f637d189-c592-4ff2-96a3-8b001688a84f\") " pod="openstack/heat-db-create-fsqnk" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.292074 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nplvc\" (UniqueName: \"kubernetes.io/projected/f637d189-c592-4ff2-96a3-8b001688a84f-kube-api-access-nplvc\") pod \"heat-db-create-fsqnk\" (UID: \"f637d189-c592-4ff2-96a3-8b001688a84f\") " pod="openstack/heat-db-create-fsqnk" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.343793 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6564fea8-f9ad-47ed-90fa-1d08616f0b60-operator-scripts\") pod \"neutron-db-create-lmg2v\" (UID: \"6564fea8-f9ad-47ed-90fa-1d08616f0b60\") " pod="openstack/neutron-db-create-lmg2v" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.344455 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05482d0-25f0-4382-bfdb-ff3053e44366-combined-ca-bundle\") pod \"keystone-db-sync-5q8ws\" (UID: \"a05482d0-25f0-4382-bfdb-ff3053e44366\") " pod="openstack/keystone-db-sync-5q8ws" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.344608 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pswv\" (UniqueName: \"kubernetes.io/projected/6564fea8-f9ad-47ed-90fa-1d08616f0b60-kube-api-access-2pswv\") pod \"neutron-db-create-lmg2v\" (UID: \"6564fea8-f9ad-47ed-90fa-1d08616f0b60\") " pod="openstack/neutron-db-create-lmg2v" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.346259 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05482d0-25f0-4382-bfdb-ff3053e44366-config-data\") pod \"keystone-db-sync-5q8ws\" (UID: \"a05482d0-25f0-4382-bfdb-ff3053e44366\") " pod="openstack/keystone-db-sync-5q8ws" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.346435 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tmlb\" (UniqueName: \"kubernetes.io/projected/a05482d0-25f0-4382-bfdb-ff3053e44366-kube-api-access-8tmlb\") pod \"keystone-db-sync-5q8ws\" (UID: \"a05482d0-25f0-4382-bfdb-ff3053e44366\") " pod="openstack/keystone-db-sync-5q8ws" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.346651 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6564fea8-f9ad-47ed-90fa-1d08616f0b60-operator-scripts\") pod 
\"neutron-db-create-lmg2v\" (UID: \"6564fea8-f9ad-47ed-90fa-1d08616f0b60\") " pod="openstack/neutron-db-create-lmg2v" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.348774 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05482d0-25f0-4382-bfdb-ff3053e44366-combined-ca-bundle\") pod \"keystone-db-sync-5q8ws\" (UID: \"a05482d0-25f0-4382-bfdb-ff3053e44366\") " pod="openstack/keystone-db-sync-5q8ws" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.352724 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05482d0-25f0-4382-bfdb-ff3053e44366-config-data\") pod \"keystone-db-sync-5q8ws\" (UID: \"a05482d0-25f0-4382-bfdb-ff3053e44366\") " pod="openstack/keystone-db-sync-5q8ws" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.376166 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pswv\" (UniqueName: \"kubernetes.io/projected/6564fea8-f9ad-47ed-90fa-1d08616f0b60-kube-api-access-2pswv\") pod \"neutron-db-create-lmg2v\" (UID: \"6564fea8-f9ad-47ed-90fa-1d08616f0b60\") " pod="openstack/neutron-db-create-lmg2v" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.425361 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-eb11-account-create-update-8g2gk"] Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.426697 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-eb11-account-create-update-8g2gk" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.429375 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.437042 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-eb11-account-create-update-8g2gk"] Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.437539 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tmlb\" (UniqueName: \"kubernetes.io/projected/a05482d0-25f0-4382-bfdb-ff3053e44366-kube-api-access-8tmlb\") pod \"keystone-db-sync-5q8ws\" (UID: \"a05482d0-25f0-4382-bfdb-ff3053e44366\") " pod="openstack/keystone-db-sync-5q8ws" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.448150 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ed0253-206d-47a8-bab2-31b8fc8cd69f-operator-scripts\") pod \"neutron-eb11-account-create-update-8g2gk\" (UID: \"60ed0253-206d-47a8-bab2-31b8fc8cd69f\") " pod="openstack/neutron-eb11-account-create-update-8g2gk" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.450493 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv27c\" (UniqueName: \"kubernetes.io/projected/60ed0253-206d-47a8-bab2-31b8fc8cd69f-kube-api-access-sv27c\") pod \"neutron-eb11-account-create-update-8g2gk\" (UID: \"60ed0253-206d-47a8-bab2-31b8fc8cd69f\") " pod="openstack/neutron-eb11-account-create-update-8g2gk" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.464556 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-fsqnk" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.502513 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5q8ws" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.552483 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ed0253-206d-47a8-bab2-31b8fc8cd69f-operator-scripts\") pod \"neutron-eb11-account-create-update-8g2gk\" (UID: \"60ed0253-206d-47a8-bab2-31b8fc8cd69f\") " pod="openstack/neutron-eb11-account-create-update-8g2gk" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.552601 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv27c\" (UniqueName: \"kubernetes.io/projected/60ed0253-206d-47a8-bab2-31b8fc8cd69f-kube-api-access-sv27c\") pod \"neutron-eb11-account-create-update-8g2gk\" (UID: \"60ed0253-206d-47a8-bab2-31b8fc8cd69f\") " pod="openstack/neutron-eb11-account-create-update-8g2gk" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.553426 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ed0253-206d-47a8-bab2-31b8fc8cd69f-operator-scripts\") pod \"neutron-eb11-account-create-update-8g2gk\" (UID: \"60ed0253-206d-47a8-bab2-31b8fc8cd69f\") " pod="openstack/neutron-eb11-account-create-update-8g2gk" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.558519 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lmg2v" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.573486 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv27c\" (UniqueName: \"kubernetes.io/projected/60ed0253-206d-47a8-bab2-31b8fc8cd69f-kube-api-access-sv27c\") pod \"neutron-eb11-account-create-update-8g2gk\" (UID: \"60ed0253-206d-47a8-bab2-31b8fc8cd69f\") " pod="openstack/neutron-eb11-account-create-update-8g2gk" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.598087 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-3bed-account-create-update-ffpcb"] Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.599581 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3bed-account-create-update-ffpcb" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.602268 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.606635 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3bed-account-create-update-ffpcb"] Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.654743 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64p4v\" (UniqueName: \"kubernetes.io/projected/879e9f18-d823-4818-b1f5-4c0d3da3afb7-kube-api-access-64p4v\") pod \"heat-3bed-account-create-update-ffpcb\" (UID: \"879e9f18-d823-4818-b1f5-4c0d3da3afb7\") " pod="openstack/heat-3bed-account-create-update-ffpcb" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.654925 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/879e9f18-d823-4818-b1f5-4c0d3da3afb7-operator-scripts\") pod \"heat-3bed-account-create-update-ffpcb\" (UID: \"879e9f18-d823-4818-b1f5-4c0d3da3afb7\") " 
pod="openstack/heat-3bed-account-create-update-ffpcb" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.753844 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-eb11-account-create-update-8g2gk" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.755987 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64p4v\" (UniqueName: \"kubernetes.io/projected/879e9f18-d823-4818-b1f5-4c0d3da3afb7-kube-api-access-64p4v\") pod \"heat-3bed-account-create-update-ffpcb\" (UID: \"879e9f18-d823-4818-b1f5-4c0d3da3afb7\") " pod="openstack/heat-3bed-account-create-update-ffpcb" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.756123 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/879e9f18-d823-4818-b1f5-4c0d3da3afb7-operator-scripts\") pod \"heat-3bed-account-create-update-ffpcb\" (UID: \"879e9f18-d823-4818-b1f5-4c0d3da3afb7\") " pod="openstack/heat-3bed-account-create-update-ffpcb" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.757064 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/879e9f18-d823-4818-b1f5-4c0d3da3afb7-operator-scripts\") pod \"heat-3bed-account-create-update-ffpcb\" (UID: \"879e9f18-d823-4818-b1f5-4c0d3da3afb7\") " pod="openstack/heat-3bed-account-create-update-ffpcb" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.774681 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64p4v\" (UniqueName: \"kubernetes.io/projected/879e9f18-d823-4818-b1f5-4c0d3da3afb7-kube-api-access-64p4v\") pod \"heat-3bed-account-create-update-ffpcb\" (UID: \"879e9f18-d823-4818-b1f5-4c0d3da3afb7\") " pod="openstack/heat-3bed-account-create-update-ffpcb" Jan 24 08:00:40 crc kubenswrapper[4705]: I0124 08:00:40.926080 4705 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/heat-3bed-account-create-update-ffpcb" Jan 24 08:00:46 crc kubenswrapper[4705]: E0124 08:00:46.339959 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 24 08:00:46 crc kubenswrapper[4705]: E0124 08:00:46.340415 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tg94x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Dr
op:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-zlxm7_openstack(d40a2abc-33c9-4284-af71-03fc828b92d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 08:00:46 crc kubenswrapper[4705]: E0124 08:00:46.341637 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-zlxm7" podUID="d40a2abc-33c9-4284-af71-03fc828b92d2" Jan 24 08:00:46 crc kubenswrapper[4705]: I0124 08:00:46.533732 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qrbfv" Jan 24 08:00:46 crc kubenswrapper[4705]: I0124 08:00:46.562393 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlgmx\" (UniqueName: \"kubernetes.io/projected/8e7afb13-2fb5-4520-acfc-d52cb558cd6c-kube-api-access-zlgmx\") pod \"8e7afb13-2fb5-4520-acfc-d52cb558cd6c\" (UID: \"8e7afb13-2fb5-4520-acfc-d52cb558cd6c\") " Jan 24 08:00:46 crc kubenswrapper[4705]: I0124 08:00:46.563413 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e7afb13-2fb5-4520-acfc-d52cb558cd6c-operator-scripts\") pod \"8e7afb13-2fb5-4520-acfc-d52cb558cd6c\" (UID: \"8e7afb13-2fb5-4520-acfc-d52cb558cd6c\") " Jan 24 08:00:46 crc kubenswrapper[4705]: I0124 08:00:46.566384 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e7afb13-2fb5-4520-acfc-d52cb558cd6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e7afb13-2fb5-4520-acfc-d52cb558cd6c" (UID: "8e7afb13-2fb5-4520-acfc-d52cb558cd6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:46 crc kubenswrapper[4705]: I0124 08:00:46.580285 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e7afb13-2fb5-4520-acfc-d52cb558cd6c-kube-api-access-zlgmx" (OuterVolumeSpecName: "kube-api-access-zlgmx") pod "8e7afb13-2fb5-4520-acfc-d52cb558cd6c" (UID: "8e7afb13-2fb5-4520-acfc-d52cb558cd6c"). InnerVolumeSpecName "kube-api-access-zlgmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:46 crc kubenswrapper[4705]: I0124 08:00:46.669692 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e7afb13-2fb5-4520-acfc-d52cb558cd6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:46 crc kubenswrapper[4705]: I0124 08:00:46.669733 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlgmx\" (UniqueName: \"kubernetes.io/projected/8e7afb13-2fb5-4520-acfc-d52cb558cd6c-kube-api-access-zlgmx\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:46 crc kubenswrapper[4705]: I0124 08:00:46.839429 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-fsqnk"] Jan 24 08:00:46 crc kubenswrapper[4705]: W0124 08:00:46.845694 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf637d189_c592_4ff2_96a3_8b001688a84f.slice/crio-61db9ced6dfb4ae1dfe6e0cc6fc08521f80112aaee615ea835da495be1bdef26 WatchSource:0}: Error finding container 61db9ced6dfb4ae1dfe6e0cc6fc08521f80112aaee615ea835da495be1bdef26: Status 404 returned error can't find the container with id 61db9ced6dfb4ae1dfe6e0cc6fc08521f80112aaee615ea835da495be1bdef26 Jan 24 08:00:46 crc kubenswrapper[4705]: I0124 08:00:46.941073 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b6a9-account-create-update-vlfsr"] Jan 24 08:00:47 crc kubenswrapper[4705]: I0124 08:00:47.036162 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xx2zn"] Jan 24 08:00:47 crc kubenswrapper[4705]: W0124 08:00:47.047628 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05c9a82f_9177_4a55_8059_6a498bbf927d.slice/crio-c79609cca89e889b3af44b0126e7663ad514e73890ce58b3882204476bec7eff WatchSource:0}: Error finding container 
c79609cca89e889b3af44b0126e7663ad514e73890ce58b3882204476bec7eff: Status 404 returned error can't find the container with id c79609cca89e889b3af44b0126e7663ad514e73890ce58b3882204476bec7eff Jan 24 08:00:47 crc kubenswrapper[4705]: I0124 08:00:47.097371 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b6a9-account-create-update-vlfsr" event={"ID":"f5d096e4-c16f-4958-a1ea-d38cdecf18da","Type":"ContainerStarted","Data":"a4b64d22b5d09f49bb98a7b1c639dcb0f8c297d701ab3e252f5032e1379daba7"} Jan 24 08:00:47 crc kubenswrapper[4705]: I0124 08:00:47.102917 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-fsqnk" event={"ID":"f637d189-c592-4ff2-96a3-8b001688a84f","Type":"ContainerStarted","Data":"7d9b1a058f2c8c327b13d3f8b8320f63e29c4eef91a6967af29b732ff2aefdf5"} Jan 24 08:00:47 crc kubenswrapper[4705]: I0124 08:00:47.102974 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-fsqnk" event={"ID":"f637d189-c592-4ff2-96a3-8b001688a84f","Type":"ContainerStarted","Data":"61db9ced6dfb4ae1dfe6e0cc6fc08521f80112aaee615ea835da495be1bdef26"} Jan 24 08:00:47 crc kubenswrapper[4705]: I0124 08:00:47.111105 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qrbfv" event={"ID":"8e7afb13-2fb5-4520-acfc-d52cb558cd6c","Type":"ContainerDied","Data":"d753f39bce964afd48a7c776f2d15017fa49313a7a38610bf981e063c3defd93"} Jan 24 08:00:47 crc kubenswrapper[4705]: I0124 08:00:47.111149 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d753f39bce964afd48a7c776f2d15017fa49313a7a38610bf981e063c3defd93" Jan 24 08:00:47 crc kubenswrapper[4705]: I0124 08:00:47.111234 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qrbfv" Jan 24 08:00:47 crc kubenswrapper[4705]: I0124 08:00:47.121259 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xx2zn" event={"ID":"05c9a82f-9177-4a55-8059-6a498bbf927d","Type":"ContainerStarted","Data":"c79609cca89e889b3af44b0126e7663ad514e73890ce58b3882204476bec7eff"} Jan 24 08:00:47 crc kubenswrapper[4705]: E0124 08:00:47.128180 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-zlxm7" podUID="d40a2abc-33c9-4284-af71-03fc828b92d2" Jan 24 08:00:47 crc kubenswrapper[4705]: I0124 08:00:47.130095 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-fsqnk" podStartSLOduration=8.130080705 podStartE2EDuration="8.130080705s" podCreationTimestamp="2026-01-24 08:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:00:47.128683626 +0000 UTC m=+1185.848556914" watchObservedRunningTime="2026-01-24 08:00:47.130080705 +0000 UTC m=+1185.849953993" Jan 24 08:00:47 crc kubenswrapper[4705]: I0124 08:00:47.213146 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5q8ws"] Jan 24 08:00:47 crc kubenswrapper[4705]: I0124 08:00:47.219802 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0742-account-create-update-sh6tr"] Jan 24 08:00:47 crc kubenswrapper[4705]: I0124 08:00:47.226253 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-eb11-account-create-update-8g2gk"] Jan 24 08:00:47 crc kubenswrapper[4705]: W0124 08:00:47.232848 4705 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda05482d0_25f0_4382_bfdb_ff3053e44366.slice/crio-79d37fe34c49c28751c27711951dc524eba6552f3359af0005d1c9c20b41ce99 WatchSource:0}: Error finding container 79d37fe34c49c28751c27711951dc524eba6552f3359af0005d1c9c20b41ce99: Status 404 returned error can't find the container with id 79d37fe34c49c28751c27711951dc524eba6552f3359af0005d1c9c20b41ce99 Jan 24 08:00:47 crc kubenswrapper[4705]: I0124 08:00:47.235572 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lmg2v"] Jan 24 08:00:47 crc kubenswrapper[4705]: W0124 08:00:47.240268 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod879e9f18_d823_4818_b1f5_4c0d3da3afb7.slice/crio-cca671f4f154675623a931f195fa558863666e6f1b2f6a17f12626809807028a WatchSource:0}: Error finding container cca671f4f154675623a931f195fa558863666e6f1b2f6a17f12626809807028a: Status 404 returned error can't find the container with id cca671f4f154675623a931f195fa558863666e6f1b2f6a17f12626809807028a Jan 24 08:00:47 crc kubenswrapper[4705]: I0124 08:00:47.246943 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3bed-account-create-update-ffpcb"] Jan 24 08:00:47 crc kubenswrapper[4705]: I0124 08:00:47.403072 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4skzp"] Jan 24 08:00:47 crc kubenswrapper[4705]: I0124 08:00:47.496543 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 24 08:00:47 crc kubenswrapper[4705]: W0124 08:00:47.504277 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2521bbad_8785_4fbf_94fe_7309e9fe3442.slice/crio-cd03778440a6b2b57e1ac8e4418f0f19af7328e153e2c9f3332b948f4889fccd WatchSource:0}: Error finding container 
cd03778440a6b2b57e1ac8e4418f0f19af7328e153e2c9f3332b948f4889fccd: Status 404 returned error can't find the container with id cd03778440a6b2b57e1ac8e4418f0f19af7328e153e2c9f3332b948f4889fccd Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.131602 4705 generic.go:334] "Generic (PLEG): container finished" podID="f5d096e4-c16f-4958-a1ea-d38cdecf18da" containerID="bbb57f5549c222454dec74a7e21afda9954e91eac4862ab766fbe53075156914" exitCode=0 Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.131665 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b6a9-account-create-update-vlfsr" event={"ID":"f5d096e4-c16f-4958-a1ea-d38cdecf18da","Type":"ContainerDied","Data":"bbb57f5549c222454dec74a7e21afda9954e91eac4862ab766fbe53075156914"} Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.132870 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5q8ws" event={"ID":"a05482d0-25f0-4382-bfdb-ff3053e44366","Type":"ContainerStarted","Data":"79d37fe34c49c28751c27711951dc524eba6552f3359af0005d1c9c20b41ce99"} Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.135436 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2521bbad-8785-4fbf-94fe-7309e9fe3442","Type":"ContainerStarted","Data":"cd03778440a6b2b57e1ac8e4418f0f19af7328e153e2c9f3332b948f4889fccd"} Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.137773 4705 generic.go:334] "Generic (PLEG): container finished" podID="05c9a82f-9177-4a55-8059-6a498bbf927d" containerID="f5eb79a7e1ec5e8375286109f00486d904c8076d80c27babf32f01d8b8b435f9" exitCode=0 Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.137880 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xx2zn" event={"ID":"05c9a82f-9177-4a55-8059-6a498bbf927d","Type":"ContainerDied","Data":"f5eb79a7e1ec5e8375286109f00486d904c8076d80c27babf32f01d8b8b435f9"} Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 
08:00:48.139244 4705 generic.go:334] "Generic (PLEG): container finished" podID="0931e94d-dcf4-447e-bbda-071ae0b176ec" containerID="12e6408405394c890f396ca13b08fc9d4550c57b6792790874de3b67b6a062b5" exitCode=0 Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.139293 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0742-account-create-update-sh6tr" event={"ID":"0931e94d-dcf4-447e-bbda-071ae0b176ec","Type":"ContainerDied","Data":"12e6408405394c890f396ca13b08fc9d4550c57b6792790874de3b67b6a062b5"} Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.139317 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0742-account-create-update-sh6tr" event={"ID":"0931e94d-dcf4-447e-bbda-071ae0b176ec","Type":"ContainerStarted","Data":"4d32bd63bf22aaac67e668413b544c36e40b65328b7813079dfb317c6e8ca702"} Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.140693 4705 generic.go:334] "Generic (PLEG): container finished" podID="60ed0253-206d-47a8-bab2-31b8fc8cd69f" containerID="3ff88df58b421fefe62256636e9e8d3049c7debc047730a79e69083289be5b51" exitCode=0 Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.140741 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-eb11-account-create-update-8g2gk" event={"ID":"60ed0253-206d-47a8-bab2-31b8fc8cd69f","Type":"ContainerDied","Data":"3ff88df58b421fefe62256636e9e8d3049c7debc047730a79e69083289be5b51"} Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.140758 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-eb11-account-create-update-8g2gk" event={"ID":"60ed0253-206d-47a8-bab2-31b8fc8cd69f","Type":"ContainerStarted","Data":"bf1484ecc6ba8d09036f20a6ca05be2165aa648be00f9a5a1d7ec62b630c9d99"} Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.142289 4705 generic.go:334] "Generic (PLEG): container finished" podID="879e9f18-d823-4818-b1f5-4c0d3da3afb7" 
containerID="e59acc1d99628313da7f48a7709ab1201cbdd0b675e5c3d6cc074bff4ef2a728" exitCode=0 Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.142338 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3bed-account-create-update-ffpcb" event={"ID":"879e9f18-d823-4818-b1f5-4c0d3da3afb7","Type":"ContainerDied","Data":"e59acc1d99628313da7f48a7709ab1201cbdd0b675e5c3d6cc074bff4ef2a728"} Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.142358 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3bed-account-create-update-ffpcb" event={"ID":"879e9f18-d823-4818-b1f5-4c0d3da3afb7","Type":"ContainerStarted","Data":"cca671f4f154675623a931f195fa558863666e6f1b2f6a17f12626809807028a"} Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.157366 4705 generic.go:334] "Generic (PLEG): container finished" podID="f637d189-c592-4ff2-96a3-8b001688a84f" containerID="7d9b1a058f2c8c327b13d3f8b8320f63e29c4eef91a6967af29b732ff2aefdf5" exitCode=0 Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.157454 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-fsqnk" event={"ID":"f637d189-c592-4ff2-96a3-8b001688a84f","Type":"ContainerDied","Data":"7d9b1a058f2c8c327b13d3f8b8320f63e29c4eef91a6967af29b732ff2aefdf5"} Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.160563 4705 generic.go:334] "Generic (PLEG): container finished" podID="6564fea8-f9ad-47ed-90fa-1d08616f0b60" containerID="8afc5f9d1ff323d3f4539a1a9882c6116af010c40b76213a8c26d7211ac432eb" exitCode=0 Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.160615 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lmg2v" event={"ID":"6564fea8-f9ad-47ed-90fa-1d08616f0b60","Type":"ContainerDied","Data":"8afc5f9d1ff323d3f4539a1a9882c6116af010c40b76213a8c26d7211ac432eb"} Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.160638 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-lmg2v" event={"ID":"6564fea8-f9ad-47ed-90fa-1d08616f0b60","Type":"ContainerStarted","Data":"e3d82a9b6c773521641d2e5da293963635892491dec6db63203350d658021328"} Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.162760 4705 generic.go:334] "Generic (PLEG): container finished" podID="28b7cacf-32a2-48d0-af65-162d7b360d89" containerID="af7f126f9c2f951196fe112213bc9a8e3a98c84d02caa118ecd7a55175838338" exitCode=0 Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.162850 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4skzp" event={"ID":"28b7cacf-32a2-48d0-af65-162d7b360d89","Type":"ContainerDied","Data":"af7f126f9c2f951196fe112213bc9a8e3a98c84d02caa118ecd7a55175838338"} Jan 24 08:00:48 crc kubenswrapper[4705]: I0124 08:00:48.162883 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4skzp" event={"ID":"28b7cacf-32a2-48d0-af65-162d7b360d89","Type":"ContainerStarted","Data":"e678459199215ae9675336a8adbcc2e30462fc582188ff5067d0625c056657b9"} Jan 24 08:00:49 crc kubenswrapper[4705]: I0124 08:00:49.175233 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2521bbad-8785-4fbf-94fe-7309e9fe3442","Type":"ContainerStarted","Data":"124375b9f8db3ac8e8427e88fcffbba319c6fd18e5936038bc6095bf5649a7fc"} Jan 24 08:00:49 crc kubenswrapper[4705]: I0124 08:00:49.175973 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2521bbad-8785-4fbf-94fe-7309e9fe3442","Type":"ContainerStarted","Data":"4bf3582869b3e1f06a9c86eb0fb277b775908c77463271f826760b889ae76ae1"} Jan 24 08:00:49 crc kubenswrapper[4705]: I0124 08:00:49.175989 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2521bbad-8785-4fbf-94fe-7309e9fe3442","Type":"ContainerStarted","Data":"03f09bd0513256f9049f322eb5d3ce78cbc65e3fc37d214867f42e0b7d7097d6"} Jan 24 08:00:53 crc 
kubenswrapper[4705]: I0124 08:00:53.129384 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xx2zn" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.164365 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lmg2v" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.174811 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0742-account-create-update-sh6tr" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.202772 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3bed-account-create-update-ffpcb" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.208171 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87j7j\" (UniqueName: \"kubernetes.io/projected/05c9a82f-9177-4a55-8059-6a498bbf927d-kube-api-access-87j7j\") pod \"05c9a82f-9177-4a55-8059-6a498bbf927d\" (UID: \"05c9a82f-9177-4a55-8059-6a498bbf927d\") " Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.208246 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6564fea8-f9ad-47ed-90fa-1d08616f0b60-operator-scripts\") pod \"6564fea8-f9ad-47ed-90fa-1d08616f0b60\" (UID: \"6564fea8-f9ad-47ed-90fa-1d08616f0b60\") " Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.208292 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05c9a82f-9177-4a55-8059-6a498bbf927d-operator-scripts\") pod \"05c9a82f-9177-4a55-8059-6a498bbf927d\" (UID: \"05c9a82f-9177-4a55-8059-6a498bbf927d\") " Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.208444 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-2pswv\" (UniqueName: \"kubernetes.io/projected/6564fea8-f9ad-47ed-90fa-1d08616f0b60-kube-api-access-2pswv\") pod \"6564fea8-f9ad-47ed-90fa-1d08616f0b60\" (UID: \"6564fea8-f9ad-47ed-90fa-1d08616f0b60\") " Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.209992 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6564fea8-f9ad-47ed-90fa-1d08616f0b60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6564fea8-f9ad-47ed-90fa-1d08616f0b60" (UID: "6564fea8-f9ad-47ed-90fa-1d08616f0b60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.210112 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c9a82f-9177-4a55-8059-6a498bbf927d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "05c9a82f-9177-4a55-8059-6a498bbf927d" (UID: "05c9a82f-9177-4a55-8059-6a498bbf927d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.215746 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c9a82f-9177-4a55-8059-6a498bbf927d-kube-api-access-87j7j" (OuterVolumeSpecName: "kube-api-access-87j7j") pod "05c9a82f-9177-4a55-8059-6a498bbf927d" (UID: "05c9a82f-9177-4a55-8059-6a498bbf927d"). InnerVolumeSpecName "kube-api-access-87j7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.218123 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6564fea8-f9ad-47ed-90fa-1d08616f0b60-kube-api-access-2pswv" (OuterVolumeSpecName: "kube-api-access-2pswv") pod "6564fea8-f9ad-47ed-90fa-1d08616f0b60" (UID: "6564fea8-f9ad-47ed-90fa-1d08616f0b60"). InnerVolumeSpecName "kube-api-access-2pswv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.219331 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-eb11-account-create-update-8g2gk" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.219400 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xx2zn" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.219405 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xx2zn" event={"ID":"05c9a82f-9177-4a55-8059-6a498bbf927d","Type":"ContainerDied","Data":"c79609cca89e889b3af44b0126e7663ad514e73890ce58b3882204476bec7eff"} Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.219774 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c79609cca89e889b3af44b0126e7663ad514e73890ce58b3882204476bec7eff" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.221546 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0742-account-create-update-sh6tr" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.221570 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0742-account-create-update-sh6tr" event={"ID":"0931e94d-dcf4-447e-bbda-071ae0b176ec","Type":"ContainerDied","Data":"4d32bd63bf22aaac67e668413b544c36e40b65328b7813079dfb317c6e8ca702"} Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.221634 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d32bd63bf22aaac67e668413b544c36e40b65328b7813079dfb317c6e8ca702" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.223701 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-eb11-account-create-update-8g2gk" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.223713 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-eb11-account-create-update-8g2gk" event={"ID":"60ed0253-206d-47a8-bab2-31b8fc8cd69f","Type":"ContainerDied","Data":"bf1484ecc6ba8d09036f20a6ca05be2165aa648be00f9a5a1d7ec62b630c9d99"} Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.223749 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf1484ecc6ba8d09036f20a6ca05be2165aa648be00f9a5a1d7ec62b630c9d99" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.234350 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3bed-account-create-update-ffpcb" event={"ID":"879e9f18-d823-4818-b1f5-4c0d3da3afb7","Type":"ContainerDied","Data":"cca671f4f154675623a931f195fa558863666e6f1b2f6a17f12626809807028a"} Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.234387 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cca671f4f154675623a931f195fa558863666e6f1b2f6a17f12626809807028a" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.234442 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-3bed-account-create-update-ffpcb" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.237420 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b6a9-account-create-update-vlfsr" event={"ID":"f5d096e4-c16f-4958-a1ea-d38cdecf18da","Type":"ContainerDied","Data":"a4b64d22b5d09f49bb98a7b1c639dcb0f8c297d701ab3e252f5032e1379daba7"} Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.237458 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4b64d22b5d09f49bb98a7b1c639dcb0f8c297d701ab3e252f5032e1379daba7" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.240334 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-fsqnk" event={"ID":"f637d189-c592-4ff2-96a3-8b001688a84f","Type":"ContainerDied","Data":"61db9ced6dfb4ae1dfe6e0cc6fc08521f80112aaee615ea835da495be1bdef26"} Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.240356 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61db9ced6dfb4ae1dfe6e0cc6fc08521f80112aaee615ea835da495be1bdef26" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.246055 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lmg2v" event={"ID":"6564fea8-f9ad-47ed-90fa-1d08616f0b60","Type":"ContainerDied","Data":"e3d82a9b6c773521641d2e5da293963635892491dec6db63203350d658021328"} Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.246081 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3d82a9b6c773521641d2e5da293963635892491dec6db63203350d658021328" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.246122 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lmg2v" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.253594 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4skzp" event={"ID":"28b7cacf-32a2-48d0-af65-162d7b360d89","Type":"ContainerDied","Data":"e678459199215ae9675336a8adbcc2e30462fc582188ff5067d0625c056657b9"} Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.253629 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e678459199215ae9675336a8adbcc2e30462fc582188ff5067d0625c056657b9" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.291368 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-fsqnk" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.310141 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/879e9f18-d823-4818-b1f5-4c0d3da3afb7-operator-scripts\") pod \"879e9f18-d823-4818-b1f5-4c0d3da3afb7\" (UID: \"879e9f18-d823-4818-b1f5-4c0d3da3afb7\") " Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.310399 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtwr8\" (UniqueName: \"kubernetes.io/projected/0931e94d-dcf4-447e-bbda-071ae0b176ec-kube-api-access-rtwr8\") pod \"0931e94d-dcf4-447e-bbda-071ae0b176ec\" (UID: \"0931e94d-dcf4-447e-bbda-071ae0b176ec\") " Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.310472 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0931e94d-dcf4-447e-bbda-071ae0b176ec-operator-scripts\") pod \"0931e94d-dcf4-447e-bbda-071ae0b176ec\" (UID: \"0931e94d-dcf4-447e-bbda-071ae0b176ec\") " Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.310570 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ed0253-206d-47a8-bab2-31b8fc8cd69f-operator-scripts\") pod \"60ed0253-206d-47a8-bab2-31b8fc8cd69f\" (UID: \"60ed0253-206d-47a8-bab2-31b8fc8cd69f\") " Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.310661 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64p4v\" (UniqueName: \"kubernetes.io/projected/879e9f18-d823-4818-b1f5-4c0d3da3afb7-kube-api-access-64p4v\") pod \"879e9f18-d823-4818-b1f5-4c0d3da3afb7\" (UID: \"879e9f18-d823-4818-b1f5-4c0d3da3afb7\") " Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.310759 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv27c\" (UniqueName: \"kubernetes.io/projected/60ed0253-206d-47a8-bab2-31b8fc8cd69f-kube-api-access-sv27c\") pod \"60ed0253-206d-47a8-bab2-31b8fc8cd69f\" (UID: \"60ed0253-206d-47a8-bab2-31b8fc8cd69f\") " Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.311588 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pswv\" (UniqueName: \"kubernetes.io/projected/6564fea8-f9ad-47ed-90fa-1d08616f0b60-kube-api-access-2pswv\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.311610 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87j7j\" (UniqueName: \"kubernetes.io/projected/05c9a82f-9177-4a55-8059-6a498bbf927d-kube-api-access-87j7j\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.311623 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6564fea8-f9ad-47ed-90fa-1d08616f0b60-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.311658 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/05c9a82f-9177-4a55-8059-6a498bbf927d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.312223 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ed0253-206d-47a8-bab2-31b8fc8cd69f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60ed0253-206d-47a8-bab2-31b8fc8cd69f" (UID: "60ed0253-206d-47a8-bab2-31b8fc8cd69f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.312641 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879e9f18-d823-4818-b1f5-4c0d3da3afb7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "879e9f18-d823-4818-b1f5-4c0d3da3afb7" (UID: "879e9f18-d823-4818-b1f5-4c0d3da3afb7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.314183 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0931e94d-dcf4-447e-bbda-071ae0b176ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0931e94d-dcf4-447e-bbda-071ae0b176ec" (UID: "0931e94d-dcf4-447e-bbda-071ae0b176ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.314621 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0931e94d-dcf4-447e-bbda-071ae0b176ec-kube-api-access-rtwr8" (OuterVolumeSpecName: "kube-api-access-rtwr8") pod "0931e94d-dcf4-447e-bbda-071ae0b176ec" (UID: "0931e94d-dcf4-447e-bbda-071ae0b176ec"). InnerVolumeSpecName "kube-api-access-rtwr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.318570 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ed0253-206d-47a8-bab2-31b8fc8cd69f-kube-api-access-sv27c" (OuterVolumeSpecName: "kube-api-access-sv27c") pod "60ed0253-206d-47a8-bab2-31b8fc8cd69f" (UID: "60ed0253-206d-47a8-bab2-31b8fc8cd69f"). InnerVolumeSpecName "kube-api-access-sv27c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.320402 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879e9f18-d823-4818-b1f5-4c0d3da3afb7-kube-api-access-64p4v" (OuterVolumeSpecName: "kube-api-access-64p4v") pod "879e9f18-d823-4818-b1f5-4c0d3da3afb7" (UID: "879e9f18-d823-4818-b1f5-4c0d3da3afb7"). InnerVolumeSpecName "kube-api-access-64p4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.320747 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b6a9-account-create-update-vlfsr" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.320885 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4skzp" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.413173 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lblwb\" (UniqueName: \"kubernetes.io/projected/28b7cacf-32a2-48d0-af65-162d7b360d89-kube-api-access-lblwb\") pod \"28b7cacf-32a2-48d0-af65-162d7b360d89\" (UID: \"28b7cacf-32a2-48d0-af65-162d7b360d89\") " Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.413229 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5d096e4-c16f-4958-a1ea-d38cdecf18da-operator-scripts\") pod \"f5d096e4-c16f-4958-a1ea-d38cdecf18da\" (UID: \"f5d096e4-c16f-4958-a1ea-d38cdecf18da\") " Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.413281 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f637d189-c592-4ff2-96a3-8b001688a84f-operator-scripts\") pod \"f637d189-c592-4ff2-96a3-8b001688a84f\" (UID: \"f637d189-c592-4ff2-96a3-8b001688a84f\") " Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.413308 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nplvc\" (UniqueName: \"kubernetes.io/projected/f637d189-c592-4ff2-96a3-8b001688a84f-kube-api-access-nplvc\") pod \"f637d189-c592-4ff2-96a3-8b001688a84f\" (UID: \"f637d189-c592-4ff2-96a3-8b001688a84f\") " Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.413419 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28b7cacf-32a2-48d0-af65-162d7b360d89-operator-scripts\") pod \"28b7cacf-32a2-48d0-af65-162d7b360d89\" (UID: \"28b7cacf-32a2-48d0-af65-162d7b360d89\") " Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.413447 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-5vx9x\" (UniqueName: \"kubernetes.io/projected/f5d096e4-c16f-4958-a1ea-d38cdecf18da-kube-api-access-5vx9x\") pod \"f5d096e4-c16f-4958-a1ea-d38cdecf18da\" (UID: \"f5d096e4-c16f-4958-a1ea-d38cdecf18da\") " Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.413730 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5d096e4-c16f-4958-a1ea-d38cdecf18da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5d096e4-c16f-4958-a1ea-d38cdecf18da" (UID: "f5d096e4-c16f-4958-a1ea-d38cdecf18da"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.413746 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f637d189-c592-4ff2-96a3-8b001688a84f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f637d189-c592-4ff2-96a3-8b001688a84f" (UID: "f637d189-c592-4ff2-96a3-8b001688a84f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.414214 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28b7cacf-32a2-48d0-af65-162d7b360d89-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28b7cacf-32a2-48d0-af65-162d7b360d89" (UID: "28b7cacf-32a2-48d0-af65-162d7b360d89"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.414306 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtwr8\" (UniqueName: \"kubernetes.io/projected/0931e94d-dcf4-447e-bbda-071ae0b176ec-kube-api-access-rtwr8\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.414326 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0931e94d-dcf4-447e-bbda-071ae0b176ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.414335 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ed0253-206d-47a8-bab2-31b8fc8cd69f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.414344 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64p4v\" (UniqueName: \"kubernetes.io/projected/879e9f18-d823-4818-b1f5-4c0d3da3afb7-kube-api-access-64p4v\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.414354 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv27c\" (UniqueName: \"kubernetes.io/projected/60ed0253-206d-47a8-bab2-31b8fc8cd69f-kube-api-access-sv27c\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.414365 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5d096e4-c16f-4958-a1ea-d38cdecf18da-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.414375 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/879e9f18-d823-4818-b1f5-4c0d3da3afb7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 
08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.414384 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f637d189-c592-4ff2-96a3-8b001688a84f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.416074 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28b7cacf-32a2-48d0-af65-162d7b360d89-kube-api-access-lblwb" (OuterVolumeSpecName: "kube-api-access-lblwb") pod "28b7cacf-32a2-48d0-af65-162d7b360d89" (UID: "28b7cacf-32a2-48d0-af65-162d7b360d89"). InnerVolumeSpecName "kube-api-access-lblwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.416610 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f637d189-c592-4ff2-96a3-8b001688a84f-kube-api-access-nplvc" (OuterVolumeSpecName: "kube-api-access-nplvc") pod "f637d189-c592-4ff2-96a3-8b001688a84f" (UID: "f637d189-c592-4ff2-96a3-8b001688a84f"). InnerVolumeSpecName "kube-api-access-nplvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.417102 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5d096e4-c16f-4958-a1ea-d38cdecf18da-kube-api-access-5vx9x" (OuterVolumeSpecName: "kube-api-access-5vx9x") pod "f5d096e4-c16f-4958-a1ea-d38cdecf18da" (UID: "f5d096e4-c16f-4958-a1ea-d38cdecf18da"). InnerVolumeSpecName "kube-api-access-5vx9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.516173 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lblwb\" (UniqueName: \"kubernetes.io/projected/28b7cacf-32a2-48d0-af65-162d7b360d89-kube-api-access-lblwb\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.516724 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nplvc\" (UniqueName: \"kubernetes.io/projected/f637d189-c592-4ff2-96a3-8b001688a84f-kube-api-access-nplvc\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.516833 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28b7cacf-32a2-48d0-af65-162d7b360d89-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:53 crc kubenswrapper[4705]: I0124 08:00:53.516921 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vx9x\" (UniqueName: \"kubernetes.io/projected/f5d096e4-c16f-4958-a1ea-d38cdecf18da-kube-api-access-5vx9x\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:54 crc kubenswrapper[4705]: I0124 08:00:54.546730 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5q8ws" event={"ID":"a05482d0-25f0-4382-bfdb-ff3053e44366","Type":"ContainerStarted","Data":"ef74a14b98b649a23a658ea9995acca22723396d3eafd55eecb16f728f04d4b2"} Jan 24 08:00:54 crc kubenswrapper[4705]: I0124 08:00:54.563647 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-fsqnk" Jan 24 08:00:54 crc kubenswrapper[4705]: I0124 08:00:54.564683 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2521bbad-8785-4fbf-94fe-7309e9fe3442","Type":"ContainerStarted","Data":"cdda8c5e19a8fcbeb17812b2549cd3cb8675be934a2ecf8d2478cf637f561fa2"} Jan 24 08:00:54 crc kubenswrapper[4705]: I0124 08:00:54.567646 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4skzp" Jan 24 08:00:54 crc kubenswrapper[4705]: I0124 08:00:54.582398 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b6a9-account-create-update-vlfsr" Jan 24 08:00:54 crc kubenswrapper[4705]: I0124 08:00:54.608485 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-5q8ws" podStartSLOduration=8.915315335 podStartE2EDuration="14.608465689s" podCreationTimestamp="2026-01-24 08:00:40 +0000 UTC" firstStartedPulling="2026-01-24 08:00:47.237891655 +0000 UTC m=+1185.957764943" lastFinishedPulling="2026-01-24 08:00:52.931042009 +0000 UTC m=+1191.650915297" observedRunningTime="2026-01-24 08:00:54.575906829 +0000 UTC m=+1193.295780137" watchObservedRunningTime="2026-01-24 08:00:54.608465689 +0000 UTC m=+1193.328338977" Jan 24 08:00:55 crc kubenswrapper[4705]: I0124 08:00:55.608292 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2521bbad-8785-4fbf-94fe-7309e9fe3442","Type":"ContainerStarted","Data":"4f700f10ae46a6cd723908f31ded77abdd461e27b9743ec023ee815c83b06bbd"} Jan 24 08:00:55 crc kubenswrapper[4705]: I0124 08:00:55.608594 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2521bbad-8785-4fbf-94fe-7309e9fe3442","Type":"ContainerStarted","Data":"3712d72bf51e10909e1ea50b920b5780748b0fdfd1cd0e435f5fe4ea964f0b53"} Jan 24 08:00:56 crc 
kubenswrapper[4705]: I0124 08:00:56.652344 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2521bbad-8785-4fbf-94fe-7309e9fe3442","Type":"ContainerStarted","Data":"927ff39ef875dd0ffff75af1bad6827d6ad6cba5d54401dbb8ac66307c1d2302"} Jan 24 08:00:56 crc kubenswrapper[4705]: I0124 08:00:56.652389 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2521bbad-8785-4fbf-94fe-7309e9fe3442","Type":"ContainerStarted","Data":"be025feb383e0ced90a1b7655110e4671e2ee9bb4c63cf9d1e44f0523adf687a"} Jan 24 08:00:57 crc kubenswrapper[4705]: I0124 08:00:57.669165 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2521bbad-8785-4fbf-94fe-7309e9fe3442","Type":"ContainerStarted","Data":"aa72464dd043abd2473aee2cd98ab8602d125ac018496f39f42d5b5cfb6a00de"} Jan 24 08:00:57 crc kubenswrapper[4705]: I0124 08:00:57.670411 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2521bbad-8785-4fbf-94fe-7309e9fe3442","Type":"ContainerStarted","Data":"9cbb9ced9f529835b809788da067bc7626e7b482198c5b97fa7b767b67b4bbe2"} Jan 24 08:00:58 crc kubenswrapper[4705]: I0124 08:00:58.692342 4705 generic.go:334] "Generic (PLEG): container finished" podID="a05482d0-25f0-4382-bfdb-ff3053e44366" containerID="ef74a14b98b649a23a658ea9995acca22723396d3eafd55eecb16f728f04d4b2" exitCode=0 Jan 24 08:00:58 crc kubenswrapper[4705]: I0124 08:00:58.692451 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5q8ws" event={"ID":"a05482d0-25f0-4382-bfdb-ff3053e44366","Type":"ContainerDied","Data":"ef74a14b98b649a23a658ea9995acca22723396d3eafd55eecb16f728f04d4b2"} Jan 24 08:00:58 crc kubenswrapper[4705]: I0124 08:00:58.705048 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"2521bbad-8785-4fbf-94fe-7309e9fe3442","Type":"ContainerStarted","Data":"4639f19498c017ff1dbd41b1649b03d2720bf74d87765661a1aaab863fa223d7"} Jan 24 08:00:58 crc kubenswrapper[4705]: I0124 08:00:58.705095 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2521bbad-8785-4fbf-94fe-7309e9fe3442","Type":"ContainerStarted","Data":"8bebd7d96ae548ea6e078b118ac174074adcc907c0ff02a3b8b14f74f645e935"} Jan 24 08:00:58 crc kubenswrapper[4705]: I0124 08:00:58.705117 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2521bbad-8785-4fbf-94fe-7309e9fe3442","Type":"ContainerStarted","Data":"e8337f11c25e5669a70ce610ff133e12c772bc2765076de3f995f729fa2d63ab"} Jan 24 08:00:58 crc kubenswrapper[4705]: I0124 08:00:58.705129 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2521bbad-8785-4fbf-94fe-7309e9fe3442","Type":"ContainerStarted","Data":"27044e4e504f7fd3d113ce54e0da4621202a72ca397492ce97bcf43608fab0d2"} Jan 24 08:00:58 crc kubenswrapper[4705]: I0124 08:00:58.705140 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2521bbad-8785-4fbf-94fe-7309e9fe3442","Type":"ContainerStarted","Data":"fd27dfdffe54a1333121da459a06e5fdbd93321491ca501f2e9549a466fab625"} Jan 24 08:00:58 crc kubenswrapper[4705]: I0124 08:00:58.758427 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=46.146124573 podStartE2EDuration="55.758410091s" podCreationTimestamp="2026-01-24 08:00:03 +0000 UTC" firstStartedPulling="2026-01-24 08:00:47.50687596 +0000 UTC m=+1186.226749238" lastFinishedPulling="2026-01-24 08:00:57.119161478 +0000 UTC m=+1195.839034756" observedRunningTime="2026-01-24 08:00:58.75121988 +0000 UTC m=+1197.471093198" watchObservedRunningTime="2026-01-24 08:00:58.758410091 +0000 UTC m=+1197.478283379" Jan 24 08:00:59 crc 
kubenswrapper[4705]: I0124 08:00:59.034243 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-ln5nw"] Jan 24 08:00:59 crc kubenswrapper[4705]: E0124 08:00:59.034642 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f637d189-c592-4ff2-96a3-8b001688a84f" containerName="mariadb-database-create" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.034663 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f637d189-c592-4ff2-96a3-8b001688a84f" containerName="mariadb-database-create" Jan 24 08:00:59 crc kubenswrapper[4705]: E0124 08:00:59.034674 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e7afb13-2fb5-4520-acfc-d52cb558cd6c" containerName="mariadb-account-create-update" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.034681 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7afb13-2fb5-4520-acfc-d52cb558cd6c" containerName="mariadb-account-create-update" Jan 24 08:00:59 crc kubenswrapper[4705]: E0124 08:00:59.034690 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879e9f18-d823-4818-b1f5-4c0d3da3afb7" containerName="mariadb-account-create-update" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.034698 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="879e9f18-d823-4818-b1f5-4c0d3da3afb7" containerName="mariadb-account-create-update" Jan 24 08:00:59 crc kubenswrapper[4705]: E0124 08:00:59.034708 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6564fea8-f9ad-47ed-90fa-1d08616f0b60" containerName="mariadb-database-create" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.034714 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6564fea8-f9ad-47ed-90fa-1d08616f0b60" containerName="mariadb-database-create" Jan 24 08:00:59 crc kubenswrapper[4705]: E0124 08:00:59.034724 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ed0253-206d-47a8-bab2-31b8fc8cd69f" 
containerName="mariadb-account-create-update" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.034730 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ed0253-206d-47a8-bab2-31b8fc8cd69f" containerName="mariadb-account-create-update" Jan 24 08:00:59 crc kubenswrapper[4705]: E0124 08:00:59.034739 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d096e4-c16f-4958-a1ea-d38cdecf18da" containerName="mariadb-account-create-update" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.034744 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d096e4-c16f-4958-a1ea-d38cdecf18da" containerName="mariadb-account-create-update" Jan 24 08:00:59 crc kubenswrapper[4705]: E0124 08:00:59.034757 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0931e94d-dcf4-447e-bbda-071ae0b176ec" containerName="mariadb-account-create-update" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.034764 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0931e94d-dcf4-447e-bbda-071ae0b176ec" containerName="mariadb-account-create-update" Jan 24 08:00:59 crc kubenswrapper[4705]: E0124 08:00:59.034777 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c9a82f-9177-4a55-8059-6a498bbf927d" containerName="mariadb-database-create" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.034784 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c9a82f-9177-4a55-8059-6a498bbf927d" containerName="mariadb-database-create" Jan 24 08:00:59 crc kubenswrapper[4705]: E0124 08:00:59.034797 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28b7cacf-32a2-48d0-af65-162d7b360d89" containerName="mariadb-database-create" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.034803 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="28b7cacf-32a2-48d0-af65-162d7b360d89" containerName="mariadb-database-create" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.034963 4705 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8e7afb13-2fb5-4520-acfc-d52cb558cd6c" containerName="mariadb-account-create-update" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.034984 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="28b7cacf-32a2-48d0-af65-162d7b360d89" containerName="mariadb-database-create" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.034991 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="879e9f18-d823-4818-b1f5-4c0d3da3afb7" containerName="mariadb-account-create-update" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.034999 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c9a82f-9177-4a55-8059-6a498bbf927d" containerName="mariadb-database-create" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.035010 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ed0253-206d-47a8-bab2-31b8fc8cd69f" containerName="mariadb-account-create-update" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.035017 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5d096e4-c16f-4958-a1ea-d38cdecf18da" containerName="mariadb-account-create-update" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.035027 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f637d189-c592-4ff2-96a3-8b001688a84f" containerName="mariadb-database-create" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.035037 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="0931e94d-dcf4-447e-bbda-071ae0b176ec" containerName="mariadb-account-create-update" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.035043 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6564fea8-f9ad-47ed-90fa-1d08616f0b60" containerName="mariadb-database-create" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.035900 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.037705 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.049670 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-ln5nw"] Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.158532 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-config\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.158615 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.158669 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.158725 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " 
pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.158750 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzlds\" (UniqueName: \"kubernetes.io/projected/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-kube-api-access-bzlds\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.158846 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.260670 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.260756 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-config\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.260801 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " 
pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.260872 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.260930 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.260958 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzlds\" (UniqueName: \"kubernetes.io/projected/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-kube-api-access-bzlds\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.261784 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.261787 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 
crc kubenswrapper[4705]: I0124 08:00:59.262016 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.262078 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-config\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.262080 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.278723 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzlds\" (UniqueName: \"kubernetes.io/projected/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-kube-api-access-bzlds\") pod \"dnsmasq-dns-5c79d794d7-ln5nw\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.352300 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:00:59 crc kubenswrapper[4705]: I0124 08:00:59.851468 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-ln5nw"] Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.008188 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5q8ws" Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.074787 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tmlb\" (UniqueName: \"kubernetes.io/projected/a05482d0-25f0-4382-bfdb-ff3053e44366-kube-api-access-8tmlb\") pod \"a05482d0-25f0-4382-bfdb-ff3053e44366\" (UID: \"a05482d0-25f0-4382-bfdb-ff3053e44366\") " Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.074857 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05482d0-25f0-4382-bfdb-ff3053e44366-combined-ca-bundle\") pod \"a05482d0-25f0-4382-bfdb-ff3053e44366\" (UID: \"a05482d0-25f0-4382-bfdb-ff3053e44366\") " Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.074942 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05482d0-25f0-4382-bfdb-ff3053e44366-config-data\") pod \"a05482d0-25f0-4382-bfdb-ff3053e44366\" (UID: \"a05482d0-25f0-4382-bfdb-ff3053e44366\") " Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.122857 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a05482d0-25f0-4382-bfdb-ff3053e44366-kube-api-access-8tmlb" (OuterVolumeSpecName: "kube-api-access-8tmlb") pod "a05482d0-25f0-4382-bfdb-ff3053e44366" (UID: "a05482d0-25f0-4382-bfdb-ff3053e44366"). InnerVolumeSpecName "kube-api-access-8tmlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.145935 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05482d0-25f0-4382-bfdb-ff3053e44366-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a05482d0-25f0-4382-bfdb-ff3053e44366" (UID: "a05482d0-25f0-4382-bfdb-ff3053e44366"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.174698 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05482d0-25f0-4382-bfdb-ff3053e44366-config-data" (OuterVolumeSpecName: "config-data") pod "a05482d0-25f0-4382-bfdb-ff3053e44366" (UID: "a05482d0-25f0-4382-bfdb-ff3053e44366"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.177846 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05482d0-25f0-4382-bfdb-ff3053e44366-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.177872 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tmlb\" (UniqueName: \"kubernetes.io/projected/a05482d0-25f0-4382-bfdb-ff3053e44366-kube-api-access-8tmlb\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.177883 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05482d0-25f0-4382-bfdb-ff3053e44366-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.736597 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5q8ws" Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.736639 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5q8ws" event={"ID":"a05482d0-25f0-4382-bfdb-ff3053e44366","Type":"ContainerDied","Data":"79d37fe34c49c28751c27711951dc524eba6552f3359af0005d1c9c20b41ce99"} Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.737952 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79d37fe34c49c28751c27711951dc524eba6552f3359af0005d1c9c20b41ce99" Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.742860 4705 generic.go:334] "Generic (PLEG): container finished" podID="7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96" containerID="cbd81f0d5ec9e58a2d1fc37d8df6ebac06e938501bd0eff495a4d61ccdef7a38" exitCode=0 Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.742902 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" event={"ID":"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96","Type":"ContainerDied","Data":"cbd81f0d5ec9e58a2d1fc37d8df6ebac06e938501bd0eff495a4d61ccdef7a38"} Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.742933 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" event={"ID":"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96","Type":"ContainerStarted","Data":"15c80fdd35aa2ba693de891795415e68fd62d0940628f93d9565b3bbd1fade48"} Jan 24 08:01:00 crc kubenswrapper[4705]: I0124 08:01:00.943239 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-ln5nw"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:00.999549 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-85lsj"] Jan 24 08:01:01 crc kubenswrapper[4705]: E0124 08:01:01.000139 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a05482d0-25f0-4382-bfdb-ff3053e44366" containerName="keystone-db-sync" 
Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.000154 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a05482d0-25f0-4382-bfdb-ff3053e44366" containerName="keystone-db-sync" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.000328 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a05482d0-25f0-4382-bfdb-ff3053e44366" containerName="keystone-db-sync" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.000984 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.006256 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.006482 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.006767 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.007016 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4gs7v" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.007240 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.015459 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-hpxs2"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.017333 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.036899 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-85lsj"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.056474 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-hpxs2"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.099903 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-tdcf8"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.100312 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-scripts\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.100463 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.100546 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-credential-keys\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.100579 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.100631 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-config\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.100656 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-dns-svc\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.100706 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-config-data\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.100776 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gsnq\" (UniqueName: \"kubernetes.io/projected/1fac1504-770d-459e-ba4e-59970bc41413-kube-api-access-5gsnq\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.100814 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m786k\" (UniqueName: 
\"kubernetes.io/projected/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-kube-api-access-m786k\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.101023 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-combined-ca-bundle\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.101048 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-fernet-keys\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.101102 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.102195 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-tdcf8" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.106946 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.107331 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-scbxs" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.169245 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-tdcf8"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.202316 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.202693 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-credential-keys\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.202743 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.202763 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-config\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: 
\"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.202800 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-dns-svc\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.202887 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-config-data\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.202934 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gsnq\" (UniqueName: \"kubernetes.io/projected/1fac1504-770d-459e-ba4e-59970bc41413-kube-api-access-5gsnq\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.203025 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-config-data\") pod \"heat-db-sync-tdcf8\" (UID: \"e66c9c6e-cac9-4e99-b4d7-532f87f30ada\") " pod="openstack/heat-db-sync-tdcf8" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.203057 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m786k\" (UniqueName: \"kubernetes.io/projected/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-kube-api-access-m786k\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " 
pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.203123 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-combined-ca-bundle\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.203147 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-fernet-keys\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.203192 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.203218 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-combined-ca-bundle\") pod \"heat-db-sync-tdcf8\" (UID: \"e66c9c6e-cac9-4e99-b4d7-532f87f30ada\") " pod="openstack/heat-db-sync-tdcf8" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.203281 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpddl\" (UniqueName: \"kubernetes.io/projected/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-kube-api-access-bpddl\") pod \"heat-db-sync-tdcf8\" (UID: \"e66c9c6e-cac9-4e99-b4d7-532f87f30ada\") " pod="openstack/heat-db-sync-tdcf8" Jan 
24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.203344 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-scripts\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.204091 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.204869 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-dns-svc\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.205646 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-config\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.205815 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.210158 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-scripts\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.226578 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.231156 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-credential-keys\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.231218 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-combined-ca-bundle\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.235603 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-config-data\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.240461 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gsnq\" (UniqueName: \"kubernetes.io/projected/1fac1504-770d-459e-ba4e-59970bc41413-kube-api-access-5gsnq\") pod 
\"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.249978 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-fernet-keys\") pod \"keystone-bootstrap-85lsj\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: E0124 08:01:01.259410 4705 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 24 08:01:01 crc kubenswrapper[4705]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 24 08:01:01 crc kubenswrapper[4705]: > podSandboxID="15c80fdd35aa2ba693de891795415e68fd62d0940628f93d9565b3bbd1fade48" Jan 24 08:01:01 crc kubenswrapper[4705]: E0124 08:01:01.259605 4705 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 24 08:01:01 crc kubenswrapper[4705]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n97h57bh654h659h5b6hbfhc4h689h565h578h56ch8dh8bh67fhf7h5f8hc7h5d4h5d5h5f7h687h5cbh5c5h5d8h68fh669h588h59bh5c6h674h5c8h5d7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bzlds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5c79d794d7-ln5nw_openstack(7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 24 08:01:01 crc kubenswrapper[4705]: > logger="UnhandledError" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.260594 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m786k\" (UniqueName: \"kubernetes.io/projected/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-kube-api-access-m786k\") pod \"dnsmasq-dns-5b868669f-hpxs2\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: E0124 08:01:01.260662 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" 
pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" podUID="7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.269057 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-lvscj"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.271187 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.275043 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.275375 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xc858" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.275540 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.300199 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-lvscj"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.304902 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-config-data\") pod \"heat-db-sync-tdcf8\" (UID: \"e66c9c6e-cac9-4e99-b4d7-532f87f30ada\") " pod="openstack/heat-db-sync-tdcf8" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.305024 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-combined-ca-bundle\") pod \"heat-db-sync-tdcf8\" (UID: \"e66c9c6e-cac9-4e99-b4d7-532f87f30ada\") " pod="openstack/heat-db-sync-tdcf8" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.305072 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpddl\" (UniqueName: 
\"kubernetes.io/projected/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-kube-api-access-bpddl\") pod \"heat-db-sync-tdcf8\" (UID: \"e66c9c6e-cac9-4e99-b4d7-532f87f30ada\") " pod="openstack/heat-db-sync-tdcf8" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.309448 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-config-data\") pod \"heat-db-sync-tdcf8\" (UID: \"e66c9c6e-cac9-4e99-b4d7-532f87f30ada\") " pod="openstack/heat-db-sync-tdcf8" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.310648 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-combined-ca-bundle\") pod \"heat-db-sync-tdcf8\" (UID: \"e66c9c6e-cac9-4e99-b4d7-532f87f30ada\") " pod="openstack/heat-db-sync-tdcf8" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.350241 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpddl\" (UniqueName: \"kubernetes.io/projected/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-kube-api-access-bpddl\") pod \"heat-db-sync-tdcf8\" (UID: \"e66c9c6e-cac9-4e99-b4d7-532f87f30ada\") " pod="openstack/heat-db-sync-tdcf8" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.359358 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.372047 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-jrq4n"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.373054 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.373517 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jrq4n" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.376517 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nsp8c" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.376756 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.392652 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jrq4n"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.406945 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s2n2\" (UniqueName: \"kubernetes.io/projected/fd93ff70-0f51-4af8-9a10-6407f4901667-kube-api-access-8s2n2\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.407096 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-combined-ca-bundle\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.407132 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-scripts\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.407177 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-config-data\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.407221 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd93ff70-0f51-4af8-9a10-6407f4901667-etc-machine-id\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.407268 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-db-sync-config-data\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.439420 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-tdcf8" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.522444 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-combined-ca-bundle\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.522487 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-scripts\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.522532 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-config-data\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.522562 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd93ff70-0f51-4af8-9a10-6407f4901667-etc-machine-id\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.522600 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-db-sync-config-data\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.522634 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s2n2\" (UniqueName: \"kubernetes.io/projected/fd93ff70-0f51-4af8-9a10-6407f4901667-kube-api-access-8s2n2\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.527665 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-combined-ca-bundle\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.528179 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd93ff70-0f51-4af8-9a10-6407f4901667-etc-machine-id\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.529738 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-scripts\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.531716 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-db-sync-config-data\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.532252 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-config-data\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.629798 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-db-sync-config-data\") pod \"barbican-db-sync-jrq4n\" (UID: \"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9\") " pod="openstack/barbican-db-sync-jrq4n" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.630242 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-combined-ca-bundle\") pod \"barbican-db-sync-jrq4n\" (UID: \"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9\") " pod="openstack/barbican-db-sync-jrq4n" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.630333 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v856d\" (UniqueName: \"kubernetes.io/projected/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-kube-api-access-v856d\") pod \"barbican-db-sync-jrq4n\" (UID: \"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9\") " pod="openstack/barbican-db-sync-jrq4n" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.652876 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s2n2\" (UniqueName: \"kubernetes.io/projected/fd93ff70-0f51-4af8-9a10-6407f4901667-kube-api-access-8s2n2\") pod \"cinder-db-sync-lvscj\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.689405 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-x9rjl"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 
08:01:01.691279 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-x9rjl"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.691425 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-x9rjl" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.726104 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zwff5" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.727777 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.731589 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.733378 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-db-sync-config-data\") pod \"barbican-db-sync-jrq4n\" (UID: \"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9\") " pod="openstack/barbican-db-sync-jrq4n" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.733431 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-combined-ca-bundle\") pod \"barbican-db-sync-jrq4n\" (UID: \"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9\") " pod="openstack/barbican-db-sync-jrq4n" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.733503 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v856d\" (UniqueName: \"kubernetes.io/projected/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-kube-api-access-v856d\") pod \"barbican-db-sync-jrq4n\" (UID: \"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9\") " pod="openstack/barbican-db-sync-jrq4n" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 
08:01:01.742257 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-combined-ca-bundle\") pod \"barbican-db-sync-jrq4n\" (UID: \"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9\") " pod="openstack/barbican-db-sync-jrq4n" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.748785 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-db-sync-config-data\") pod \"barbican-db-sync-jrq4n\" (UID: \"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9\") " pod="openstack/barbican-db-sync-jrq4n" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.786812 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.799451 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v856d\" (UniqueName: \"kubernetes.io/projected/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-kube-api-access-v856d\") pod \"barbican-db-sync-jrq4n\" (UID: \"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9\") " pod="openstack/barbican-db-sync-jrq4n" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.814910 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-pvrgf"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.816550 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.829632 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.829869 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.830236 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xc858" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.830586 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.831508 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-lvscj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.835947 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e0eee50a-21e0-4948-9afa-b552d6173e3b-config\") pod \"neutron-db-sync-x9rjl\" (UID: \"e0eee50a-21e0-4948-9afa-b552d6173e3b\") " pod="openstack/neutron-db-sync-x9rjl" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.836228 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0eee50a-21e0-4948-9afa-b552d6173e3b-combined-ca-bundle\") pod \"neutron-db-sync-x9rjl\" (UID: \"e0eee50a-21e0-4948-9afa-b552d6173e3b\") " pod="openstack/neutron-db-sync-x9rjl" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.839130 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw4xr\" (UniqueName: \"kubernetes.io/projected/e0eee50a-21e0-4948-9afa-b552d6173e3b-kube-api-access-gw4xr\") pod \"neutron-db-sync-x9rjl\" (UID: \"e0eee50a-21e0-4948-9afa-b552d6173e3b\") " pod="openstack/neutron-db-sync-x9rjl" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.838673 4705 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bdqk9" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.838777 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.857895 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.866501 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nsp8c" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.867381 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jrq4n" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.877243 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.899367 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-pvrgf"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.921795 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-hpxs2"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.939038 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-mgzjj"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.943135 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.944948 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eb2004e-d936-4fb9-929b-b949158ac9b8-logs\") pod \"placement-db-sync-pvrgf\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.945010 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e0eee50a-21e0-4948-9afa-b552d6173e3b-config\") pod \"neutron-db-sync-x9rjl\" (UID: \"e0eee50a-21e0-4948-9afa-b552d6173e3b\") " pod="openstack/neutron-db-sync-x9rjl" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.945107 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-scripts\") pod \"placement-db-sync-pvrgf\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.945232 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnd89\" (UniqueName: \"kubernetes.io/projected/787ad3bd-2593-42a7-b368-70abddcd74da-kube-api-access-rnd89\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.945445 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-config-data\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 
08:01:01.945462 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.945503 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-combined-ca-bundle\") pod \"placement-db-sync-pvrgf\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.945556 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/787ad3bd-2593-42a7-b368-70abddcd74da-log-httpd\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.945676 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-config-data\") pod \"placement-db-sync-pvrgf\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.945690 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.945715 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0eee50a-21e0-4948-9afa-b552d6173e3b-combined-ca-bundle\") pod \"neutron-db-sync-x9rjl\" (UID: \"e0eee50a-21e0-4948-9afa-b552d6173e3b\") " pod="openstack/neutron-db-sync-x9rjl" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.945746 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-scripts\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.945770 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw4xr\" (UniqueName: \"kubernetes.io/projected/e0eee50a-21e0-4948-9afa-b552d6173e3b-kube-api-access-gw4xr\") pod \"neutron-db-sync-x9rjl\" (UID: \"e0eee50a-21e0-4948-9afa-b552d6173e3b\") " pod="openstack/neutron-db-sync-x9rjl" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.945795 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/787ad3bd-2593-42a7-b368-70abddcd74da-run-httpd\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.945832 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw47p\" (UniqueName: \"kubernetes.io/projected/6eb2004e-d936-4fb9-929b-b949158ac9b8-kube-api-access-nw47p\") pod \"placement-db-sync-pvrgf\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.951329 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 
08:01:01.956084 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-mgzjj"] Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.970851 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0eee50a-21e0-4948-9afa-b552d6173e3b-combined-ca-bundle\") pod \"neutron-db-sync-x9rjl\" (UID: \"e0eee50a-21e0-4948-9afa-b552d6173e3b\") " pod="openstack/neutron-db-sync-x9rjl" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.976322 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw4xr\" (UniqueName: \"kubernetes.io/projected/e0eee50a-21e0-4948-9afa-b552d6173e3b-kube-api-access-gw4xr\") pod \"neutron-db-sync-x9rjl\" (UID: \"e0eee50a-21e0-4948-9afa-b552d6173e3b\") " pod="openstack/neutron-db-sync-x9rjl" Jan 24 08:01:01 crc kubenswrapper[4705]: I0124 08:01:01.979562 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e0eee50a-21e0-4948-9afa-b552d6173e3b-config\") pod \"neutron-db-sync-x9rjl\" (UID: \"e0eee50a-21e0-4948-9afa-b552d6173e3b\") " pod="openstack/neutron-db-sync-x9rjl" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.046775 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zwff5" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.048097 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-scripts\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.048156 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/787ad3bd-2593-42a7-b368-70abddcd74da-run-httpd\") pod 
\"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.048183 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw47p\" (UniqueName: \"kubernetes.io/projected/6eb2004e-d936-4fb9-929b-b949158ac9b8-kube-api-access-nw47p\") pod \"placement-db-sync-pvrgf\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.048223 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.048262 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eb2004e-d936-4fb9-929b-b949158ac9b8-logs\") pod \"placement-db-sync-pvrgf\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.048285 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-config\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.048324 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-scripts\") pod \"placement-db-sync-pvrgf\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " 
pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.048345 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbggg\" (UniqueName: \"kubernetes.io/projected/cc339718-4e46-4f6a-b36b-efded67a561c-kube-api-access-bbggg\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.048393 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnd89\" (UniqueName: \"kubernetes.io/projected/787ad3bd-2593-42a7-b368-70abddcd74da-kube-api-access-rnd89\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.048421 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.048896 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eb2004e-d936-4fb9-929b-b949158ac9b8-logs\") pod \"placement-db-sync-pvrgf\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.048445 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-config-data\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.049708 
4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.049778 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-combined-ca-bundle\") pod \"placement-db-sync-pvrgf\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.049847 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/787ad3bd-2593-42a7-b368-70abddcd74da-log-httpd\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.049909 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-dns-svc\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.049941 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.049993 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-config-data\") pod \"placement-db-sync-pvrgf\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.050012 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.051476 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/787ad3bd-2593-42a7-b368-70abddcd74da-log-httpd\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.051513 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/787ad3bd-2593-42a7-b368-70abddcd74da-run-httpd\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.056369 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-x9rjl" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.083785 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-scripts\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.084127 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.084337 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-scripts\") pod \"placement-db-sync-pvrgf\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.084971 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-config-data\") pod \"placement-db-sync-pvrgf\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.085752 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.086783 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-combined-ca-bundle\") pod \"placement-db-sync-pvrgf\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.088190 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-config-data\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.088204 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw47p\" (UniqueName: \"kubernetes.io/projected/6eb2004e-d936-4fb9-929b-b949158ac9b8-kube-api-access-nw47p\") pod \"placement-db-sync-pvrgf\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.090553 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnd89\" (UniqueName: \"kubernetes.io/projected/787ad3bd-2593-42a7-b368-70abddcd74da-kube-api-access-rnd89\") pod \"ceilometer-0\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " pod="openstack/ceilometer-0" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.140384 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-85lsj"] Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.163544 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-dns-svc\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.163608 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.164799 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.164953 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-dns-svc\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.165080 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-config\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.165219 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbggg\" (UniqueName: \"kubernetes.io/projected/cc339718-4e46-4f6a-b36b-efded67a561c-kube-api-access-bbggg\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.165366 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.165793 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.165218 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.165995 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-config\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.168062 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.168796 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.185057 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbggg\" (UniqueName: \"kubernetes.io/projected/cc339718-4e46-4f6a-b36b-efded67a561c-kube-api-access-bbggg\") pod \"dnsmasq-dns-cf78879c9-mgzjj\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:02 crc kubenswrapper[4705]: I0124 08:01:02.185881 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.283633 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.381017 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-hpxs2"] Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.411521 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.649835 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-tdcf8"] Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.717203 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.844846 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-hpxs2" event={"ID":"72b7be3a-58c3-4968-9408-1ac1e11b5ebe","Type":"ContainerStarted","Data":"e7568c2113e61d7e47695a031f260d5130987cb55609efcbda17565ee607097a"} Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.848211 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.848157 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-ln5nw" event={"ID":"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96","Type":"ContainerDied","Data":"15c80fdd35aa2ba693de891795415e68fd62d0940628f93d9565b3bbd1fade48"} Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.848340 4705 scope.go:117] "RemoveContainer" containerID="cbd81f0d5ec9e58a2d1fc37d8df6ebac06e938501bd0eff495a4d61ccdef7a38" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.856441 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-85lsj" event={"ID":"1fac1504-770d-459e-ba4e-59970bc41413","Type":"ContainerStarted","Data":"9e5fc067f6521054c987e891000d286cfdab47d360347586c669d52bf398e08e"} Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.856530 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-85lsj" event={"ID":"1fac1504-770d-459e-ba4e-59970bc41413","Type":"ContainerStarted","Data":"97c98b3d5a249f02592513f15bc0fe78f032ddb9b7a7a1333d197fece1ad7922"} Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.857493 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-tdcf8" event={"ID":"e66c9c6e-cac9-4e99-b4d7-532f87f30ada","Type":"ContainerStarted","Data":"fe1f58e4b2c7752839a891627839854b848eaa81ff67813462ee13f7f4287db9"} Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.877005 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-85lsj" podStartSLOduration=2.876985157 podStartE2EDuration="2.876985157s" podCreationTimestamp="2026-01-24 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:02.873027386 +0000 UTC m=+1201.592900674" 
watchObservedRunningTime="2026-01-24 08:01:02.876985157 +0000 UTC m=+1201.596858445" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.884259 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-dns-svc\") pod \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.884300 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-dns-swift-storage-0\") pod \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.884377 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-ovsdbserver-sb\") pod \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.884493 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzlds\" (UniqueName: \"kubernetes.io/projected/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-kube-api-access-bzlds\") pod \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.884542 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-ovsdbserver-nb\") pod \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.884670 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-config\") pod \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\" (UID: \"7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96\") " Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.897750 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-kube-api-access-bzlds" (OuterVolumeSpecName: "kube-api-access-bzlds") pod "7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96" (UID: "7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96"). InnerVolumeSpecName "kube-api-access-bzlds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.935261 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96" (UID: "7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.939457 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96" (UID: "7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.946010 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96" (UID: "7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.946456 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96" (UID: "7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.951628 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-config" (OuterVolumeSpecName: "config") pod "7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96" (UID: "7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.988584 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.988617 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.988631 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.988647 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:03 
crc kubenswrapper[4705]: I0124 08:01:02.988659 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzlds\" (UniqueName: \"kubernetes.io/projected/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-kube-api-access-bzlds\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:02.988671 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.217393 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-ln5nw"] Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.225612 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-ln5nw"] Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.491107 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-lvscj"] Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.593619 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96" path="/var/lib/kubelet/pods/7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96/volumes" Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.718552 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.796976 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-x9rjl"] Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.811852 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-pvrgf"] Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.842835 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jrq4n"] Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.854378 4705 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-cf78879c9-mgzjj"] Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.864520 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.894453 4705 generic.go:334] "Generic (PLEG): container finished" podID="72b7be3a-58c3-4968-9408-1ac1e11b5ebe" containerID="4918e053e9fbbffd54df6c29b74a998944d5cadc0a14450a9bec6796d4e915d8" exitCode=0 Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.894539 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-hpxs2" event={"ID":"72b7be3a-58c3-4968-9408-1ac1e11b5ebe","Type":"ContainerDied","Data":"4918e053e9fbbffd54df6c29b74a998944d5cadc0a14450a9bec6796d4e915d8"} Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.896045 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" event={"ID":"cc339718-4e46-4f6a-b36b-efded67a561c","Type":"ContainerStarted","Data":"f45e6e1731740cf3f068b151d886601e55c6a940c407caf0e8dce35141083aad"} Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.924606 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lvscj" event={"ID":"fd93ff70-0f51-4af8-9a10-6407f4901667","Type":"ContainerStarted","Data":"1c82d2210eacf7c42a27c9ae5482fe3de9a1b8b6f7c5e600d53f8afa472388a4"} Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.935744 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jrq4n" event={"ID":"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9","Type":"ContainerStarted","Data":"fb94e8eba020fd8e89d948fc981a5efc512350727877613ce7b060c2e7eb89aa"} Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.938001 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"787ad3bd-2593-42a7-b368-70abddcd74da","Type":"ContainerStarted","Data":"81a4cb47f07ac46e19d2227548b6b6ca251e9d2e38d55da3a6d08b7c54e6b110"} Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.939733 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-x9rjl" event={"ID":"e0eee50a-21e0-4948-9afa-b552d6173e3b","Type":"ContainerStarted","Data":"8121708f4fb1b834c3c414b0786b5c96dc49be218866ed4f6630d9f16e70cc25"} Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.943322 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zlxm7" event={"ID":"d40a2abc-33c9-4284-af71-03fc828b92d2","Type":"ContainerStarted","Data":"d71d7328d5b349c8d9015fd098f29ffe7922d1d706f8fdd6285f156e63a8b4be"} Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.967462 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pvrgf" event={"ID":"6eb2004e-d936-4fb9-929b-b949158ac9b8","Type":"ContainerStarted","Data":"4ddb0d398fad6a4040af9c45ea15cf94ba12f0628a4130fec25b8359b71f2bd1"} Jan 24 08:01:03 crc kubenswrapper[4705]: I0124 08:01:03.974721 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-zlxm7" podStartSLOduration=4.714141184 podStartE2EDuration="41.974698133s" podCreationTimestamp="2026-01-24 08:00:22 +0000 UTC" firstStartedPulling="2026-01-24 08:00:23.920135171 +0000 UTC m=+1162.640008459" lastFinishedPulling="2026-01-24 08:01:01.18069212 +0000 UTC m=+1199.900565408" observedRunningTime="2026-01-24 08:01:03.968276804 +0000 UTC m=+1202.688150092" watchObservedRunningTime="2026-01-24 08:01:03.974698133 +0000 UTC m=+1202.694571421" Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.264876 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.376397 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-dns-swift-storage-0\") pod \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.376738 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-config\") pod \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.376911 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-ovsdbserver-nb\") pod \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.376941 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-dns-svc\") pod \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.377022 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m786k\" (UniqueName: \"kubernetes.io/projected/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-kube-api-access-m786k\") pod \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.377125 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-ovsdbserver-sb\") pod \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\" (UID: \"72b7be3a-58c3-4968-9408-1ac1e11b5ebe\") " Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.393335 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-kube-api-access-m786k" (OuterVolumeSpecName: "kube-api-access-m786k") pod "72b7be3a-58c3-4968-9408-1ac1e11b5ebe" (UID: "72b7be3a-58c3-4968-9408-1ac1e11b5ebe"). InnerVolumeSpecName "kube-api-access-m786k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.407516 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "72b7be3a-58c3-4968-9408-1ac1e11b5ebe" (UID: "72b7be3a-58c3-4968-9408-1ac1e11b5ebe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.419718 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-config" (OuterVolumeSpecName: "config") pod "72b7be3a-58c3-4968-9408-1ac1e11b5ebe" (UID: "72b7be3a-58c3-4968-9408-1ac1e11b5ebe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.432695 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "72b7be3a-58c3-4968-9408-1ac1e11b5ebe" (UID: "72b7be3a-58c3-4968-9408-1ac1e11b5ebe"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.440409 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "72b7be3a-58c3-4968-9408-1ac1e11b5ebe" (UID: "72b7be3a-58c3-4968-9408-1ac1e11b5ebe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.480344 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.480390 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m786k\" (UniqueName: \"kubernetes.io/projected/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-kube-api-access-m786k\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.480406 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.480420 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.480432 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.481291 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "72b7be3a-58c3-4968-9408-1ac1e11b5ebe" (UID: "72b7be3a-58c3-4968-9408-1ac1e11b5ebe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.582085 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72b7be3a-58c3-4968-9408-1ac1e11b5ebe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.990457 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-hpxs2" event={"ID":"72b7be3a-58c3-4968-9408-1ac1e11b5ebe","Type":"ContainerDied","Data":"e7568c2113e61d7e47695a031f260d5130987cb55609efcbda17565ee607097a"} Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.990789 4705 scope.go:117] "RemoveContainer" containerID="4918e053e9fbbffd54df6c29b74a998944d5cadc0a14450a9bec6796d4e915d8" Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.993051 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-hpxs2" Jan 24 08:01:04 crc kubenswrapper[4705]: I0124 08:01:04.999420 4705 generic.go:334] "Generic (PLEG): container finished" podID="cc339718-4e46-4f6a-b36b-efded67a561c" containerID="526ec6189612107234019a3b343ed79f0dc242c063c39f7af15c5802bd7da53d" exitCode=0 Jan 24 08:01:05 crc kubenswrapper[4705]: I0124 08:01:05.000403 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" event={"ID":"cc339718-4e46-4f6a-b36b-efded67a561c","Type":"ContainerDied","Data":"526ec6189612107234019a3b343ed79f0dc242c063c39f7af15c5802bd7da53d"} Jan 24 08:01:05 crc kubenswrapper[4705]: I0124 08:01:05.007048 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-x9rjl" event={"ID":"e0eee50a-21e0-4948-9afa-b552d6173e3b","Type":"ContainerStarted","Data":"fd943b09146631048436c719415acd1c7f286ac6c01c5fa0c29fffcb6ba8dcd0"} Jan 24 08:01:05 crc kubenswrapper[4705]: I0124 08:01:05.106182 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-x9rjl" podStartSLOduration=4.106163121 podStartE2EDuration="4.106163121s" podCreationTimestamp="2026-01-24 08:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:05.049869949 +0000 UTC m=+1203.769743237" watchObservedRunningTime="2026-01-24 08:01:05.106163121 +0000 UTC m=+1203.826036409" Jan 24 08:01:05 crc kubenswrapper[4705]: I0124 08:01:05.190604 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-hpxs2"] Jan 24 08:01:05 crc kubenswrapper[4705]: I0124 08:01:05.249174 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-hpxs2"] Jan 24 08:01:05 crc kubenswrapper[4705]: I0124 08:01:05.594076 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="72b7be3a-58c3-4968-9408-1ac1e11b5ebe" path="/var/lib/kubelet/pods/72b7be3a-58c3-4968-9408-1ac1e11b5ebe/volumes" Jan 24 08:01:06 crc kubenswrapper[4705]: I0124 08:01:06.018657 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" event={"ID":"cc339718-4e46-4f6a-b36b-efded67a561c","Type":"ContainerStarted","Data":"80ef3fb7d00c0c5530b1f57793d2316b7775efcc9fc035062478738f5b76c85a"} Jan 24 08:01:06 crc kubenswrapper[4705]: I0124 08:01:06.020095 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:06 crc kubenswrapper[4705]: I0124 08:01:06.045438 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" podStartSLOduration=5.04541463 podStartE2EDuration="5.04541463s" podCreationTimestamp="2026-01-24 08:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:06.040500962 +0000 UTC m=+1204.760374270" watchObservedRunningTime="2026-01-24 08:01:06.04541463 +0000 UTC m=+1204.765287918" Jan 24 08:01:07 crc kubenswrapper[4705]: I0124 08:01:07.073007 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:01:07 crc kubenswrapper[4705]: I0124 08:01:07.073406 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:01:07 crc kubenswrapper[4705]: I0124 08:01:07.073491 4705 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 08:01:07 crc kubenswrapper[4705]: I0124 08:01:07.074422 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"853514deca6f38cd0a77ff6aa66eff5f7cb660b73f8271ebb43497a216af6f05"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 08:01:07 crc kubenswrapper[4705]: I0124 08:01:07.074499 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://853514deca6f38cd0a77ff6aa66eff5f7cb660b73f8271ebb43497a216af6f05" gracePeriod=600 Jan 24 08:01:08 crc kubenswrapper[4705]: I0124 08:01:08.060964 4705 generic.go:334] "Generic (PLEG): container finished" podID="1fac1504-770d-459e-ba4e-59970bc41413" containerID="9e5fc067f6521054c987e891000d286cfdab47d360347586c669d52bf398e08e" exitCode=0 Jan 24 08:01:08 crc kubenswrapper[4705]: I0124 08:01:08.061075 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-85lsj" event={"ID":"1fac1504-770d-459e-ba4e-59970bc41413","Type":"ContainerDied","Data":"9e5fc067f6521054c987e891000d286cfdab47d360347586c669d52bf398e08e"} Jan 24 08:01:08 crc kubenswrapper[4705]: I0124 08:01:08.064697 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerID="853514deca6f38cd0a77ff6aa66eff5f7cb660b73f8271ebb43497a216af6f05" exitCode=0 Jan 24 08:01:08 crc kubenswrapper[4705]: I0124 08:01:08.064771 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"853514deca6f38cd0a77ff6aa66eff5f7cb660b73f8271ebb43497a216af6f05"} Jan 24 08:01:08 crc kubenswrapper[4705]: I0124 08:01:08.064813 4705 scope.go:117] "RemoveContainer" containerID="4157e341864c82a6048118572db8f63cf29b32c144fe523434dcc318955e439e" Jan 24 08:01:12 crc kubenswrapper[4705]: I0124 08:01:12.286350 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:01:12 crc kubenswrapper[4705]: I0124 08:01:12.347616 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-vckb7"] Jan 24 08:01:12 crc kubenswrapper[4705]: I0124 08:01:12.347940 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" podUID="e1fa4568-6ba7-4897-9076-b1778b317348" containerName="dnsmasq-dns" containerID="cri-o://7d9cd277b55c747795c0bf203164ead2df49ba401605c4ca2f1919f3e864fb3f" gracePeriod=10 Jan 24 08:01:13 crc kubenswrapper[4705]: I0124 08:01:13.925340 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" podUID="e1fa4568-6ba7-4897-9076-b1778b317348" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 24 08:01:14 crc kubenswrapper[4705]: I0124 08:01:14.116678 4705 generic.go:334] "Generic (PLEG): container finished" podID="e1fa4568-6ba7-4897-9076-b1778b317348" containerID="7d9cd277b55c747795c0bf203164ead2df49ba401605c4ca2f1919f3e864fb3f" exitCode=0 Jan 24 08:01:14 crc kubenswrapper[4705]: I0124 08:01:14.116725 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" event={"ID":"e1fa4568-6ba7-4897-9076-b1778b317348","Type":"ContainerDied","Data":"7d9cd277b55c747795c0bf203164ead2df49ba401605c4ca2f1919f3e864fb3f"} Jan 24 08:01:18 crc kubenswrapper[4705]: I0124 08:01:18.925514 4705 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" podUID="e1fa4568-6ba7-4897-9076-b1778b317348" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 24 08:01:19 crc kubenswrapper[4705]: E0124 08:01:19.269756 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 24 08:01:19 crc kubenswrapper[4705]: E0124 08:01:19.269942 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nw47p,ReadOnly:true
,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-pvrgf_openstack(6eb2004e-d936-4fb9-929b-b949158ac9b8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 08:01:19 crc kubenswrapper[4705]: E0124 08:01:19.271287 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-pvrgf" podUID="6eb2004e-d936-4fb9-929b-b949158ac9b8" Jan 24 08:01:20 crc kubenswrapper[4705]: E0124 08:01:20.168981 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-pvrgf" podUID="6eb2004e-d936-4fb9-929b-b949158ac9b8" Jan 24 08:01:23 crc kubenswrapper[4705]: I0124 08:01:23.925091 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" podUID="e1fa4568-6ba7-4897-9076-b1778b317348" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: 
connect: connection refused" Jan 24 08:01:23 crc kubenswrapper[4705]: I0124 08:01:23.925999 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:01:24 crc kubenswrapper[4705]: E0124 08:01:24.868116 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 24 08:01:24 crc kubenswrapper[4705]: E0124 08:01:24.868637 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpddl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe
:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tdcf8_openstack(e66c9c6e-cac9-4e99-b4d7-532f87f30ada): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 08:01:24 crc kubenswrapper[4705]: E0124 08:01:24.869989 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-tdcf8" podUID="e66c9c6e-cac9-4e99-b4d7-532f87f30ada" Jan 24 08:01:25 crc kubenswrapper[4705]: E0124 08:01:25.227953 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-tdcf8" podUID="e66c9c6e-cac9-4e99-b4d7-532f87f30ada" Jan 24 08:01:26 crc kubenswrapper[4705]: I0124 08:01:26.237201 4705 generic.go:334] "Generic (PLEG): container finished" podID="d40a2abc-33c9-4284-af71-03fc828b92d2" containerID="d71d7328d5b349c8d9015fd098f29ffe7922d1d706f8fdd6285f156e63a8b4be" exitCode=0 Jan 24 08:01:26 crc kubenswrapper[4705]: I0124 08:01:26.237298 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zlxm7" 
event={"ID":"d40a2abc-33c9-4284-af71-03fc828b92d2","Type":"ContainerDied","Data":"d71d7328d5b349c8d9015fd098f29ffe7922d1d706f8fdd6285f156e63a8b4be"} Jan 24 08:01:28 crc kubenswrapper[4705]: I0124 08:01:28.925218 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" podUID="e1fa4568-6ba7-4897-9076-b1778b317348" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 24 08:01:33 crc kubenswrapper[4705]: I0124 08:01:33.925641 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" podUID="e1fa4568-6ba7-4897-9076-b1778b317348" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.274493 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.332851 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-combined-ca-bundle\") pod \"1fac1504-770d-459e-ba4e-59970bc41413\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.333003 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-scripts\") pod \"1fac1504-770d-459e-ba4e-59970bc41413\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.333069 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-config-data\") pod \"1fac1504-770d-459e-ba4e-59970bc41413\" (UID: 
\"1fac1504-770d-459e-ba4e-59970bc41413\") " Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.333418 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gsnq\" (UniqueName: \"kubernetes.io/projected/1fac1504-770d-459e-ba4e-59970bc41413-kube-api-access-5gsnq\") pod \"1fac1504-770d-459e-ba4e-59970bc41413\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.333551 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-fernet-keys\") pod \"1fac1504-770d-459e-ba4e-59970bc41413\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.333619 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-credential-keys\") pod \"1fac1504-770d-459e-ba4e-59970bc41413\" (UID: \"1fac1504-770d-459e-ba4e-59970bc41413\") " Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.341938 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fac1504-770d-459e-ba4e-59970bc41413-kube-api-access-5gsnq" (OuterVolumeSpecName: "kube-api-access-5gsnq") pod "1fac1504-770d-459e-ba4e-59970bc41413" (UID: "1fac1504-770d-459e-ba4e-59970bc41413"). InnerVolumeSpecName "kube-api-access-5gsnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.342083 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-scripts" (OuterVolumeSpecName: "scripts") pod "1fac1504-770d-459e-ba4e-59970bc41413" (UID: "1fac1504-770d-459e-ba4e-59970bc41413"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.345995 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1fac1504-770d-459e-ba4e-59970bc41413" (UID: "1fac1504-770d-459e-ba4e-59970bc41413"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.369031 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1fac1504-770d-459e-ba4e-59970bc41413" (UID: "1fac1504-770d-459e-ba4e-59970bc41413"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.371508 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fac1504-770d-459e-ba4e-59970bc41413" (UID: "1fac1504-770d-459e-ba4e-59970bc41413"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.374478 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-config-data" (OuterVolumeSpecName: "config-data") pod "1fac1504-770d-459e-ba4e-59970bc41413" (UID: "1fac1504-770d-459e-ba4e-59970bc41413"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.437947 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gsnq\" (UniqueName: \"kubernetes.io/projected/1fac1504-770d-459e-ba4e-59970bc41413-kube-api-access-5gsnq\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.438015 4705 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.438033 4705 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.438052 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.438067 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.438080 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fac1504-770d-459e-ba4e-59970bc41413-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.495024 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-85lsj" event={"ID":"1fac1504-770d-459e-ba4e-59970bc41413","Type":"ContainerDied","Data":"97c98b3d5a249f02592513f15bc0fe78f032ddb9b7a7a1333d197fece1ad7922"} Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 
08:01:34.495666 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97c98b3d5a249f02592513f15bc0fe78f032ddb9b7a7a1333d197fece1ad7922" Jan 24 08:01:34 crc kubenswrapper[4705]: I0124 08:01:34.495105 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-85lsj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.365241 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-85lsj"] Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.372626 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-85lsj"] Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.487148 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-f8ghj"] Jan 24 08:01:35 crc kubenswrapper[4705]: E0124 08:01:35.487851 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96" containerName="init" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.487874 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96" containerName="init" Jan 24 08:01:35 crc kubenswrapper[4705]: E0124 08:01:35.487896 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72b7be3a-58c3-4968-9408-1ac1e11b5ebe" containerName="init" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.487905 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="72b7be3a-58c3-4968-9408-1ac1e11b5ebe" containerName="init" Jan 24 08:01:35 crc kubenswrapper[4705]: E0124 08:01:35.487936 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fac1504-770d-459e-ba4e-59970bc41413" containerName="keystone-bootstrap" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.487944 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fac1504-770d-459e-ba4e-59970bc41413" containerName="keystone-bootstrap" Jan 24 
08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.488169 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0c00cf-d44d-4731-a1ee-1a01d9d0eb96" containerName="init" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.488192 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fac1504-770d-459e-ba4e-59970bc41413" containerName="keystone-bootstrap" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.488206 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="72b7be3a-58c3-4968-9408-1ac1e11b5ebe" containerName="init" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.489152 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.492086 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4gs7v" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.492515 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.495740 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.496312 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f8ghj"] Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.496795 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.497422 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.559638 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-config-data\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.559692 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nclnh\" (UniqueName: \"kubernetes.io/projected/5a444b75-4995-40f9-8432-b62814685b02-kube-api-access-nclnh\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.559736 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-credential-keys\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.559757 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-fernet-keys\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.559804 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-scripts\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.559855 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-combined-ca-bundle\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.587998 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fac1504-770d-459e-ba4e-59970bc41413" path="/var/lib/kubelet/pods/1fac1504-770d-459e-ba4e-59970bc41413/volumes" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.662313 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-scripts\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.662607 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-combined-ca-bundle\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.663000 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-config-data\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.663058 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nclnh\" (UniqueName: \"kubernetes.io/projected/5a444b75-4995-40f9-8432-b62814685b02-kube-api-access-nclnh\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc 
kubenswrapper[4705]: I0124 08:01:35.663464 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-credential-keys\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.663504 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-fernet-keys\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.670752 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-scripts\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.670921 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-credential-keys\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.671134 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-combined-ca-bundle\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.671699 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-config-data\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.682555 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-fernet-keys\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.696887 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nclnh\" (UniqueName: \"kubernetes.io/projected/5a444b75-4995-40f9-8432-b62814685b02-kube-api-access-nclnh\") pod \"keystone-bootstrap-f8ghj\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:35 crc kubenswrapper[4705]: I0124 08:01:35.814123 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:37 crc kubenswrapper[4705]: I0124 08:01:37.382037 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zlxm7" Jan 24 08:01:37 crc kubenswrapper[4705]: I0124 08:01:37.423250 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg94x\" (UniqueName: \"kubernetes.io/projected/d40a2abc-33c9-4284-af71-03fc828b92d2-kube-api-access-tg94x\") pod \"d40a2abc-33c9-4284-af71-03fc828b92d2\" (UID: \"d40a2abc-33c9-4284-af71-03fc828b92d2\") " Jan 24 08:01:37 crc kubenswrapper[4705]: I0124 08:01:37.423305 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-db-sync-config-data\") pod \"d40a2abc-33c9-4284-af71-03fc828b92d2\" (UID: \"d40a2abc-33c9-4284-af71-03fc828b92d2\") " Jan 24 08:01:37 crc kubenswrapper[4705]: I0124 08:01:37.423352 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-config-data\") pod \"d40a2abc-33c9-4284-af71-03fc828b92d2\" (UID: \"d40a2abc-33c9-4284-af71-03fc828b92d2\") " Jan 24 08:01:37 crc kubenswrapper[4705]: I0124 08:01:37.423406 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-combined-ca-bundle\") pod \"d40a2abc-33c9-4284-af71-03fc828b92d2\" (UID: \"d40a2abc-33c9-4284-af71-03fc828b92d2\") " Jan 24 08:01:37 crc kubenswrapper[4705]: I0124 08:01:37.431444 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d40a2abc-33c9-4284-af71-03fc828b92d2-kube-api-access-tg94x" (OuterVolumeSpecName: "kube-api-access-tg94x") pod "d40a2abc-33c9-4284-af71-03fc828b92d2" (UID: "d40a2abc-33c9-4284-af71-03fc828b92d2"). InnerVolumeSpecName "kube-api-access-tg94x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:37 crc kubenswrapper[4705]: I0124 08:01:37.445363 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d40a2abc-33c9-4284-af71-03fc828b92d2" (UID: "d40a2abc-33c9-4284-af71-03fc828b92d2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:37 crc kubenswrapper[4705]: I0124 08:01:37.460701 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d40a2abc-33c9-4284-af71-03fc828b92d2" (UID: "d40a2abc-33c9-4284-af71-03fc828b92d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:37 crc kubenswrapper[4705]: I0124 08:01:37.492779 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-config-data" (OuterVolumeSpecName: "config-data") pod "d40a2abc-33c9-4284-af71-03fc828b92d2" (UID: "d40a2abc-33c9-4284-af71-03fc828b92d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:37 crc kubenswrapper[4705]: I0124 08:01:37.526515 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg94x\" (UniqueName: \"kubernetes.io/projected/d40a2abc-33c9-4284-af71-03fc828b92d2-kube-api-access-tg94x\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:37 crc kubenswrapper[4705]: I0124 08:01:37.526553 4705 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:37 crc kubenswrapper[4705]: I0124 08:01:37.526563 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:37 crc kubenswrapper[4705]: I0124 08:01:37.526576 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40a2abc-33c9-4284-af71-03fc828b92d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:37 crc kubenswrapper[4705]: I0124 08:01:37.532373 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zlxm7" event={"ID":"d40a2abc-33c9-4284-af71-03fc828b92d2","Type":"ContainerDied","Data":"60f6babc4b4667f834bdc18cfd3fe12fa2d150fdc3802b5c989880e3d23e925f"} Jan 24 08:01:37 crc kubenswrapper[4705]: I0124 08:01:37.532419 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60f6babc4b4667f834bdc18cfd3fe12fa2d150fdc3802b5c989880e3d23e925f" Jan 24 08:01:37 crc kubenswrapper[4705]: I0124 08:01:37.532499 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zlxm7" Jan 24 08:01:38 crc kubenswrapper[4705]: E0124 08:01:38.533481 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 24 08:01:38 crc kubenswrapper[4705]: E0124 08:01:38.534099 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca
-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8s2n2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-lvscj_openstack(fd93ff70-0f51-4af8-9a10-6407f4901667): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 08:01:38 crc kubenswrapper[4705]: E0124 08:01:38.536153 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-lvscj" podUID="fd93ff70-0f51-4af8-9a10-6407f4901667" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.047262 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-dz5x6"] Jan 24 08:01:39 crc kubenswrapper[4705]: E0124 08:01:39.047975 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40a2abc-33c9-4284-af71-03fc828b92d2" containerName="glance-db-sync" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.047995 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40a2abc-33c9-4284-af71-03fc828b92d2" containerName="glance-db-sync" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.048225 
4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40a2abc-33c9-4284-af71-03fc828b92d2" containerName="glance-db-sync" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.049566 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.078953 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-dz5x6"] Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.232119 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25xkq\" (UniqueName: \"kubernetes.io/projected/385af144-eae2-4843-804d-cd7c42a6b833-kube-api-access-25xkq\") pod \"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.232251 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.232308 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-config\") pod \"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.232426 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-dns-swift-storage-0\") pod 
\"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.232556 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.232749 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.334740 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.334855 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25xkq\" (UniqueName: \"kubernetes.io/projected/385af144-eae2-4843-804d-cd7c42a6b833-kube-api-access-25xkq\") pod \"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.334928 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-ovsdbserver-nb\") pod 
\"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.335004 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-config\") pod \"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.335112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.335184 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.336223 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.336367 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.336540 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-config\") pod \"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.336579 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.337249 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.366211 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25xkq\" (UniqueName: \"kubernetes.io/projected/385af144-eae2-4843-804d-cd7c42a6b833-kube-api-access-25xkq\") pod \"dnsmasq-dns-56df8fb6b7-dz5x6\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.384187 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:39 crc kubenswrapper[4705]: E0124 08:01:39.554980 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-lvscj" podUID="fd93ff70-0f51-4af8-9a10-6407f4901667" Jan 24 08:01:39 crc kubenswrapper[4705]: E0124 08:01:39.581117 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 24 08:01:39 crc kubenswrapper[4705]: E0124 08:01:39.581500 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n85h588h6bhddhf5h674h64ch596h6ch54ch68bh5f4h554h687h546h578hc7h98hf6h59fh545h5d4h596h68ch64dh688h55bh9chd7h68hf5hcbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnd89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(787ad3bd-2593-42a7-b368-70abddcd74da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.851192 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.854555 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.857696 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.858287 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ggqbv" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.858597 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.869720 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.912492 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.950893 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.950968 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.951023 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.951061 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ce9412-740d-4e20-8af5-58ebfc498c94-logs\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.951090 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.951144 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndq6j\" (UniqueName: \"kubernetes.io/projected/e9ce9412-740d-4e20-8af5-58ebfc498c94-kube-api-access-ndq6j\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:39 crc kubenswrapper[4705]: I0124 08:01:39.951171 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9ce9412-740d-4e20-8af5-58ebfc498c94-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.057159 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddwcz\" (UniqueName: 
\"kubernetes.io/projected/e1fa4568-6ba7-4897-9076-b1778b317348-kube-api-access-ddwcz\") pod \"e1fa4568-6ba7-4897-9076-b1778b317348\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.057694 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-ovsdbserver-sb\") pod \"e1fa4568-6ba7-4897-9076-b1778b317348\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.059029 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-dns-svc\") pod \"e1fa4568-6ba7-4897-9076-b1778b317348\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.059103 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-config\") pod \"e1fa4568-6ba7-4897-9076-b1778b317348\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.059171 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-ovsdbserver-nb\") pod \"e1fa4568-6ba7-4897-9076-b1778b317348\" (UID: \"e1fa4568-6ba7-4897-9076-b1778b317348\") " Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.060037 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 
08:01:40.060152 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.060357 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.060659 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ce9412-740d-4e20-8af5-58ebfc498c94-logs\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.060707 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.060887 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndq6j\" (UniqueName: \"kubernetes.io/projected/e9ce9412-740d-4e20-8af5-58ebfc498c94-kube-api-access-ndq6j\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.060961 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9ce9412-740d-4e20-8af5-58ebfc498c94-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.061734 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9ce9412-740d-4e20-8af5-58ebfc498c94-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.062739 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ce9412-740d-4e20-8af5-58ebfc498c94-logs\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.062839 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.070974 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.075738 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1fa4568-6ba7-4897-9076-b1778b317348-kube-api-access-ddwcz" 
(OuterVolumeSpecName: "kube-api-access-ddwcz") pod "e1fa4568-6ba7-4897-9076-b1778b317348" (UID: "e1fa4568-6ba7-4897-9076-b1778b317348"). InnerVolumeSpecName "kube-api-access-ddwcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.076444 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.085608 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.096213 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndq6j\" (UniqueName: \"kubernetes.io/projected/e9ce9412-740d-4e20-8af5-58ebfc498c94-kube-api-access-ndq6j\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.174129 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddwcz\" (UniqueName: \"kubernetes.io/projected/e1fa4568-6ba7-4897-9076-b1778b317348-kube-api-access-ddwcz\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.194330 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " 
pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.262707 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 08:01:40 crc kubenswrapper[4705]: E0124 08:01:40.265318 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fa4568-6ba7-4897-9076-b1778b317348" containerName="init" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.265483 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fa4568-6ba7-4897-9076-b1778b317348" containerName="init" Jan 24 08:01:40 crc kubenswrapper[4705]: E0124 08:01:40.265589 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fa4568-6ba7-4897-9076-b1778b317348" containerName="dnsmasq-dns" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.265665 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fa4568-6ba7-4897-9076-b1778b317348" containerName="dnsmasq-dns" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.266029 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fa4568-6ba7-4897-9076-b1778b317348" containerName="dnsmasq-dns" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.268880 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.275742 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.286183 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.286475 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.320453 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f8ghj"] Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.335331 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e1fa4568-6ba7-4897-9076-b1778b317348" (UID: "e1fa4568-6ba7-4897-9076-b1778b317348"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.347476 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-config" (OuterVolumeSpecName: "config") pod "e1fa4568-6ba7-4897-9076-b1778b317348" (UID: "e1fa4568-6ba7-4897-9076-b1778b317348"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.355267 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e1fa4568-6ba7-4897-9076-b1778b317348" (UID: "e1fa4568-6ba7-4897-9076-b1778b317348"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.356679 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1fa4568-6ba7-4897-9076-b1778b317348" (UID: "e1fa4568-6ba7-4897-9076-b1778b317348"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.381412 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36cb3f13-e839-473d-a14e-d4bd4512d00d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.382374 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.382552 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j6f8\" (UniqueName: \"kubernetes.io/projected/36cb3f13-e839-473d-a14e-d4bd4512d00d-kube-api-access-7j6f8\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.382716 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.382847 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.383006 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36cb3f13-e839-473d-a14e-d4bd4512d00d-logs\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.383118 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.383566 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.383770 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.383888 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.383959 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1fa4568-6ba7-4897-9076-b1778b317348-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.458010 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-dz5x6"] Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.536842 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j6f8\" (UniqueName: \"kubernetes.io/projected/36cb3f13-e839-473d-a14e-d4bd4512d00d-kube-api-access-7j6f8\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.536899 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.536924 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.536960 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36cb3f13-e839-473d-a14e-d4bd4512d00d-logs\") pod \"glance-default-internal-api-0\" 
(UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.536980 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.537068 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36cb3f13-e839-473d-a14e-d4bd4512d00d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.537090 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.537502 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.539479 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36cb3f13-e839-473d-a14e-d4bd4512d00d-logs\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 
08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.544440 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36cb3f13-e839-473d-a14e-d4bd4512d00d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.548576 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.550123 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.556054 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.568909 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j6f8\" (UniqueName: \"kubernetes.io/projected/36cb3f13-e839-473d-a14e-d4bd4512d00d-kube-api-access-7j6f8\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.574281 4705 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/heat-db-sync-tdcf8" event={"ID":"e66c9c6e-cac9-4e99-b4d7-532f87f30ada","Type":"ContainerStarted","Data":"af1891f6e9ec6e86a3f010a149959b79147b36c8417b8953f50a9e9358d38436"} Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.597226 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pvrgf" event={"ID":"6eb2004e-d936-4fb9-929b-b949158ac9b8","Type":"ContainerStarted","Data":"059112f9c3818b8fe52b2126ef42893a6b452be271822904bf04a1268c4cb111"} Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.618418 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" event={"ID":"385af144-eae2-4843-804d-cd7c42a6b833","Type":"ContainerStarted","Data":"1243285316e32dcf32a5bd72fe139d61cca6b1159df74b402282ad11c4fbafd5"} Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.633398 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f8ghj" event={"ID":"5a444b75-4995-40f9-8432-b62814685b02","Type":"ContainerStarted","Data":"8519f0ed3fa037009edc28e2e04a47e02ed5ca7e8b44505e7316fda57297586d"} Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.634480 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.651348 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jrq4n" event={"ID":"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9","Type":"ContainerStarted","Data":"0777ce2c4bf566e52505eac9641d5f15c457353ae9c295220ba2953dca4c36de"} Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.658156 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" 
event={"ID":"e1fa4568-6ba7-4897-9076-b1778b317348","Type":"ContainerDied","Data":"8834781682d635ae73b2b725eb02d733a05f372e571ddcb838d4af1d91fe2a2b"} Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.658417 4705 scope.go:117] "RemoveContainer" containerID="7d9cd277b55c747795c0bf203164ead2df49ba401605c4ca2f1919f3e864fb3f" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.658690 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.677497 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"775d2ed06dfb0c63c8b346ce0d06f95edde444c60e383f78b6d4ebabd731e08f"} Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.693001 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-tdcf8" podStartSLOduration=2.539535357 podStartE2EDuration="39.69296902s" podCreationTimestamp="2026-01-24 08:01:01 +0000 UTC" firstStartedPulling="2026-01-24 08:01:02.678623666 +0000 UTC m=+1201.398496954" lastFinishedPulling="2026-01-24 08:01:39.832057329 +0000 UTC m=+1238.551930617" observedRunningTime="2026-01-24 08:01:40.62388031 +0000 UTC m=+1239.343753618" watchObservedRunningTime="2026-01-24 08:01:40.69296902 +0000 UTC m=+1239.412842298" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.701102 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-pvrgf" podStartSLOduration=3.767362537 podStartE2EDuration="39.701056026s" podCreationTimestamp="2026-01-24 08:01:01 +0000 UTC" firstStartedPulling="2026-01-24 08:01:03.825576877 +0000 UTC m=+1202.545450165" lastFinishedPulling="2026-01-24 08:01:39.759270376 +0000 UTC m=+1238.479143654" observedRunningTime="2026-01-24 08:01:40.652431997 +0000 UTC 
m=+1239.372305285" watchObservedRunningTime="2026-01-24 08:01:40.701056026 +0000 UTC m=+1239.420929314" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.762995 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-jrq4n" podStartSLOduration=3.964284709 podStartE2EDuration="39.762955655s" podCreationTimestamp="2026-01-24 08:01:01 +0000 UTC" firstStartedPulling="2026-01-24 08:01:03.819335493 +0000 UTC m=+1202.539208781" lastFinishedPulling="2026-01-24 08:01:39.618006439 +0000 UTC m=+1238.337879727" observedRunningTime="2026-01-24 08:01:40.68080686 +0000 UTC m=+1239.400680148" watchObservedRunningTime="2026-01-24 08:01:40.762955655 +0000 UTC m=+1239.482828943" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.825020 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-vckb7"] Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.830073 4705 scope.go:117] "RemoveContainer" containerID="bff9c576de78c517853d33fba2abec2766aaeb984a42555cde3faa1f32370a73" Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.834072 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-vckb7"] Jan 24 08:01:40 crc kubenswrapper[4705]: I0124 08:01:40.912148 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 08:01:41 crc kubenswrapper[4705]: I0124 08:01:41.202693 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 08:01:41 crc kubenswrapper[4705]: I0124 08:01:41.591176 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1fa4568-6ba7-4897-9076-b1778b317348" path="/var/lib/kubelet/pods/e1fa4568-6ba7-4897-9076-b1778b317348/volumes" Jan 24 08:01:41 crc kubenswrapper[4705]: I0124 08:01:41.675004 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 08:01:41 crc kubenswrapper[4705]: I0124 08:01:41.699993 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f8ghj" event={"ID":"5a444b75-4995-40f9-8432-b62814685b02","Type":"ContainerStarted","Data":"61561a32e954dfb4dce0a870ed165eece13279f26a9aec75ff9695cbacfca56a"} Jan 24 08:01:41 crc kubenswrapper[4705]: I0124 08:01:41.706838 4705 generic.go:334] "Generic (PLEG): container finished" podID="e0eee50a-21e0-4948-9afa-b552d6173e3b" containerID="fd943b09146631048436c719415acd1c7f286ac6c01c5fa0c29fffcb6ba8dcd0" exitCode=0 Jan 24 08:01:41 crc kubenswrapper[4705]: I0124 08:01:41.708455 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-x9rjl" event={"ID":"e0eee50a-21e0-4948-9afa-b552d6173e3b","Type":"ContainerDied","Data":"fd943b09146631048436c719415acd1c7f286ac6c01c5fa0c29fffcb6ba8dcd0"} Jan 24 08:01:41 crc kubenswrapper[4705]: I0124 08:01:41.716217 4705 generic.go:334] "Generic (PLEG): container finished" podID="385af144-eae2-4843-804d-cd7c42a6b833" containerID="df7d90c7229d1b466c3f9c3a608f687a6ec8f77f32c174ac88a5cd9c3bea1cea" exitCode=0 Jan 24 08:01:41 crc kubenswrapper[4705]: I0124 08:01:41.716335 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" 
event={"ID":"385af144-eae2-4843-804d-cd7c42a6b833","Type":"ContainerDied","Data":"df7d90c7229d1b466c3f9c3a608f687a6ec8f77f32c174ac88a5cd9c3bea1cea"} Jan 24 08:01:41 crc kubenswrapper[4705]: I0124 08:01:41.769033 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-f8ghj" podStartSLOduration=6.768991869 podStartE2EDuration="6.768991869s" podCreationTimestamp="2026-01-24 08:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:41.749461453 +0000 UTC m=+1240.469334741" watchObservedRunningTime="2026-01-24 08:01:41.768991869 +0000 UTC m=+1240.488865157" Jan 24 08:01:42 crc kubenswrapper[4705]: W0124 08:01:42.190228 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36cb3f13_e839_473d_a14e_d4bd4512d00d.slice/crio-4bf5460d667925d8218b6a083959357be812d72e325d969e401860c99883710a WatchSource:0}: Error finding container 4bf5460d667925d8218b6a083959357be812d72e325d969e401860c99883710a: Status 404 returned error can't find the container with id 4bf5460d667925d8218b6a083959357be812d72e325d969e401860c99883710a Jan 24 08:01:42 crc kubenswrapper[4705]: W0124 08:01:42.199179 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9ce9412_740d_4e20_8af5_58ebfc498c94.slice/crio-0725176db28d7ae9f9e7471957422f39b61b63a8fa80ba961004880b344247ed WatchSource:0}: Error finding container 0725176db28d7ae9f9e7471957422f39b61b63a8fa80ba961004880b344247ed: Status 404 returned error can't find the container with id 0725176db28d7ae9f9e7471957422f39b61b63a8fa80ba961004880b344247ed Jan 24 08:01:42 crc kubenswrapper[4705]: I0124 08:01:42.764091 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"36cb3f13-e839-473d-a14e-d4bd4512d00d","Type":"ContainerStarted","Data":"4bf5460d667925d8218b6a083959357be812d72e325d969e401860c99883710a"} Jan 24 08:01:42 crc kubenswrapper[4705]: I0124 08:01:42.771170 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9ce9412-740d-4e20-8af5-58ebfc498c94","Type":"ContainerStarted","Data":"0725176db28d7ae9f9e7471957422f39b61b63a8fa80ba961004880b344247ed"} Jan 24 08:01:42 crc kubenswrapper[4705]: I0124 08:01:42.780377 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"787ad3bd-2593-42a7-b368-70abddcd74da","Type":"ContainerStarted","Data":"29b1229f77bf2b10062994cd722d4cec0626c16f125b04fe450268fefe2563f9"} Jan 24 08:01:42 crc kubenswrapper[4705]: I0124 08:01:42.784266 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" event={"ID":"385af144-eae2-4843-804d-cd7c42a6b833","Type":"ContainerStarted","Data":"f2acf8383febf1af3d46d448dbdb8078c43de49bdad72ce41b31cedce0394b33"} Jan 24 08:01:42 crc kubenswrapper[4705]: I0124 08:01:42.784415 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:42 crc kubenswrapper[4705]: I0124 08:01:42.812491 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" podStartSLOduration=4.812475129 podStartE2EDuration="4.812475129s" podCreationTimestamp="2026-01-24 08:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:42.81105384 +0000 UTC m=+1241.530927148" watchObservedRunningTime="2026-01-24 08:01:42.812475129 +0000 UTC m=+1241.532348417" Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.038226 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 
08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.120767 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.496213 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-x9rjl" Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.559512 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0eee50a-21e0-4948-9afa-b552d6173e3b-combined-ca-bundle\") pod \"e0eee50a-21e0-4948-9afa-b552d6173e3b\" (UID: \"e0eee50a-21e0-4948-9afa-b552d6173e3b\") " Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.559640 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw4xr\" (UniqueName: \"kubernetes.io/projected/e0eee50a-21e0-4948-9afa-b552d6173e3b-kube-api-access-gw4xr\") pod \"e0eee50a-21e0-4948-9afa-b552d6173e3b\" (UID: \"e0eee50a-21e0-4948-9afa-b552d6173e3b\") " Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.559927 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e0eee50a-21e0-4948-9afa-b552d6173e3b-config\") pod \"e0eee50a-21e0-4948-9afa-b552d6173e3b\" (UID: \"e0eee50a-21e0-4948-9afa-b552d6173e3b\") " Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.579314 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0eee50a-21e0-4948-9afa-b552d6173e3b-kube-api-access-gw4xr" (OuterVolumeSpecName: "kube-api-access-gw4xr") pod "e0eee50a-21e0-4948-9afa-b552d6173e3b" (UID: "e0eee50a-21e0-4948-9afa-b552d6173e3b"). InnerVolumeSpecName "kube-api-access-gw4xr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.607520 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0eee50a-21e0-4948-9afa-b552d6173e3b-config" (OuterVolumeSpecName: "config") pod "e0eee50a-21e0-4948-9afa-b552d6173e3b" (UID: "e0eee50a-21e0-4948-9afa-b552d6173e3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.618300 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0eee50a-21e0-4948-9afa-b552d6173e3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0eee50a-21e0-4948-9afa-b552d6173e3b" (UID: "e0eee50a-21e0-4948-9afa-b552d6173e3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.661775 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw4xr\" (UniqueName: \"kubernetes.io/projected/e0eee50a-21e0-4948-9afa-b552d6173e3b-kube-api-access-gw4xr\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.661815 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e0eee50a-21e0-4948-9afa-b552d6173e3b-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.661842 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0eee50a-21e0-4948-9afa-b552d6173e3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.803940 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"36cb3f13-e839-473d-a14e-d4bd4512d00d","Type":"ContainerStarted","Data":"5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37"} Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.806364 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9ce9412-740d-4e20-8af5-58ebfc498c94","Type":"ContainerStarted","Data":"4f3eb92733f9b03af49fb39cfbb2debda278ec008867de3ba58349524e5c93db"} Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.808081 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-x9rjl" event={"ID":"e0eee50a-21e0-4948-9afa-b552d6173e3b","Type":"ContainerDied","Data":"8121708f4fb1b834c3c414b0786b5c96dc49be218866ed4f6630d9f16e70cc25"} Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.808120 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8121708f4fb1b834c3c414b0786b5c96dc49be218866ed4f6630d9f16e70cc25" Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.808121 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-x9rjl" Jan 24 08:01:43 crc kubenswrapper[4705]: I0124 08:01:43.927811 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-vckb7" podUID="e1fa4568-6ba7-4897-9076-b1778b317348" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.177266 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-dz5x6"] Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.246199 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-lcm9r"] Jan 24 08:01:44 crc kubenswrapper[4705]: E0124 08:01:44.247017 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0eee50a-21e0-4948-9afa-b552d6173e3b" containerName="neutron-db-sync" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.247036 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0eee50a-21e0-4948-9afa-b552d6173e3b" containerName="neutron-db-sync" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.247280 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0eee50a-21e0-4948-9afa-b552d6173e3b" containerName="neutron-db-sync" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.248525 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.273850 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77b96594b8-9jfp2"] Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.282740 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.282803 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-dns-svc\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.282865 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.282919 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-config\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.282957 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.283009 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r8nd\" (UniqueName: \"kubernetes.io/projected/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-kube-api-access-5r8nd\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.289453 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.294173 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.294410 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.294616 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.295521 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zwff5" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.383151 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77b96594b8-9jfp2"] Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.386339 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj9nx\" (UniqueName: \"kubernetes.io/projected/266a132c-d822-44b5-a75c-e359c65c78ea-kube-api-access-rj9nx\") pod 
\"neutron-77b96594b8-9jfp2\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.386417 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-config\") pod \"neutron-77b96594b8-9jfp2\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.386479 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r8nd\" (UniqueName: \"kubernetes.io/projected/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-kube-api-access-5r8nd\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.386595 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-httpd-config\") pod \"neutron-77b96594b8-9jfp2\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.386642 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-ovndb-tls-certs\") pod \"neutron-77b96594b8-9jfp2\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.386725 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: 
\"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.386771 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-dns-svc\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.388732 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.388970 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-dns-svc\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.389083 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.389122 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-combined-ca-bundle\") pod \"neutron-77b96594b8-9jfp2\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " 
pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.389245 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.389294 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-config\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.390021 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-config\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.391227 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.398404 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 
08:01:44.438087 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r8nd\" (UniqueName: \"kubernetes.io/projected/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-kube-api-access-5r8nd\") pod \"dnsmasq-dns-6b7b667979-lcm9r\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.481415 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-lcm9r"] Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.494489 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-httpd-config\") pod \"neutron-77b96594b8-9jfp2\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.494545 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-ovndb-tls-certs\") pod \"neutron-77b96594b8-9jfp2\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.494612 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-combined-ca-bundle\") pod \"neutron-77b96594b8-9jfp2\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.494669 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj9nx\" (UniqueName: \"kubernetes.io/projected/266a132c-d822-44b5-a75c-e359c65c78ea-kube-api-access-rj9nx\") pod \"neutron-77b96594b8-9jfp2\" (UID: 
\"266a132c-d822-44b5-a75c-e359c65c78ea\") " pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.494690 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-config\") pod \"neutron-77b96594b8-9jfp2\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.499699 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-config\") pod \"neutron-77b96594b8-9jfp2\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.501812 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-httpd-config\") pod \"neutron-77b96594b8-9jfp2\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.501841 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-combined-ca-bundle\") pod \"neutron-77b96594b8-9jfp2\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.523236 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj9nx\" (UniqueName: \"kubernetes.io/projected/266a132c-d822-44b5-a75c-e359c65c78ea-kube-api-access-rj9nx\") pod \"neutron-77b96594b8-9jfp2\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.540928 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-ovndb-tls-certs\") pod \"neutron-77b96594b8-9jfp2\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.616584 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.643744 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.874035 4705 generic.go:334] "Generic (PLEG): container finished" podID="6eb2004e-d936-4fb9-929b-b949158ac9b8" containerID="059112f9c3818b8fe52b2126ef42893a6b452be271822904bf04a1268c4cb111" exitCode=0 Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.874512 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pvrgf" event={"ID":"6eb2004e-d936-4fb9-929b-b949158ac9b8","Type":"ContainerDied","Data":"059112f9c3818b8fe52b2126ef42893a6b452be271822904bf04a1268c4cb111"} Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.880007 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="36cb3f13-e839-473d-a14e-d4bd4512d00d" containerName="glance-log" containerID="cri-o://5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37" gracePeriod=30 Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.880842 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"36cb3f13-e839-473d-a14e-d4bd4512d00d","Type":"ContainerStarted","Data":"129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870"} Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.880919 4705 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="36cb3f13-e839-473d-a14e-d4bd4512d00d" containerName="glance-httpd" containerID="cri-o://129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870" gracePeriod=30 Jan 24 08:01:44 crc kubenswrapper[4705]: I0124 08:01:44.955155 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.955130706 podStartE2EDuration="5.955130706s" podCreationTimestamp="2026-01-24 08:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:44.943682886 +0000 UTC m=+1243.663556184" watchObservedRunningTime="2026-01-24 08:01:44.955130706 +0000 UTC m=+1243.675004004" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.219224 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-lcm9r"] Jan 24 08:01:45 crc kubenswrapper[4705]: W0124 08:01:45.260623 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ee7d188_54dd_45e5_95f9_75b0bcef52d3.slice/crio-423cfb3fa77200ebe2b1ddfe6947fe6215d0d6aef96716447dce0974bd844638 WatchSource:0}: Error finding container 423cfb3fa77200ebe2b1ddfe6947fe6215d0d6aef96716447dce0974bd844638: Status 404 returned error can't find the container with id 423cfb3fa77200ebe2b1ddfe6947fe6215d0d6aef96716447dce0974bd844638 Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.545741 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77b96594b8-9jfp2"] Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.774253 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.826051 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-scripts\") pod \"36cb3f13-e839-473d-a14e-d4bd4512d00d\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.826113 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-combined-ca-bundle\") pod \"36cb3f13-e839-473d-a14e-d4bd4512d00d\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.826201 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36cb3f13-e839-473d-a14e-d4bd4512d00d-logs\") pod \"36cb3f13-e839-473d-a14e-d4bd4512d00d\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.826268 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-config-data\") pod \"36cb3f13-e839-473d-a14e-d4bd4512d00d\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.826391 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36cb3f13-e839-473d-a14e-d4bd4512d00d-httpd-run\") pod \"36cb3f13-e839-473d-a14e-d4bd4512d00d\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.826444 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"36cb3f13-e839-473d-a14e-d4bd4512d00d\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.826518 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j6f8\" (UniqueName: \"kubernetes.io/projected/36cb3f13-e839-473d-a14e-d4bd4512d00d-kube-api-access-7j6f8\") pod \"36cb3f13-e839-473d-a14e-d4bd4512d00d\" (UID: \"36cb3f13-e839-473d-a14e-d4bd4512d00d\") " Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.830517 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36cb3f13-e839-473d-a14e-d4bd4512d00d-logs" (OuterVolumeSpecName: "logs") pod "36cb3f13-e839-473d-a14e-d4bd4512d00d" (UID: "36cb3f13-e839-473d-a14e-d4bd4512d00d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.830539 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36cb3f13-e839-473d-a14e-d4bd4512d00d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "36cb3f13-e839-473d-a14e-d4bd4512d00d" (UID: "36cb3f13-e839-473d-a14e-d4bd4512d00d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.836162 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-scripts" (OuterVolumeSpecName: "scripts") pod "36cb3f13-e839-473d-a14e-d4bd4512d00d" (UID: "36cb3f13-e839-473d-a14e-d4bd4512d00d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.840985 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36cb3f13-e839-473d-a14e-d4bd4512d00d-kube-api-access-7j6f8" (OuterVolumeSpecName: "kube-api-access-7j6f8") pod "36cb3f13-e839-473d-a14e-d4bd4512d00d" (UID: "36cb3f13-e839-473d-a14e-d4bd4512d00d"). InnerVolumeSpecName "kube-api-access-7j6f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.846077 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "36cb3f13-e839-473d-a14e-d4bd4512d00d" (UID: "36cb3f13-e839-473d-a14e-d4bd4512d00d"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.889140 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36cb3f13-e839-473d-a14e-d4bd4512d00d" (UID: "36cb3f13-e839-473d-a14e-d4bd4512d00d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.892602 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9ce9412-740d-4e20-8af5-58ebfc498c94","Type":"ContainerStarted","Data":"13015c5180b7a5fe56d9cc75dd2c7cc86aa8463f2265b0623d4aed95278c3c44"} Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.892865 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e9ce9412-740d-4e20-8af5-58ebfc498c94" containerName="glance-httpd" containerID="cri-o://13015c5180b7a5fe56d9cc75dd2c7cc86aa8463f2265b0623d4aed95278c3c44" gracePeriod=30 Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.892967 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e9ce9412-740d-4e20-8af5-58ebfc498c94" containerName="glance-log" containerID="cri-o://4f3eb92733f9b03af49fb39cfbb2debda278ec008867de3ba58349524e5c93db" gracePeriod=30 Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.901276 4705 generic.go:334] "Generic (PLEG): container finished" podID="bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9" containerID="0777ce2c4bf566e52505eac9641d5f15c457353ae9c295220ba2953dca4c36de" exitCode=0 Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.901388 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jrq4n" event={"ID":"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9","Type":"ContainerDied","Data":"0777ce2c4bf566e52505eac9641d5f15c457353ae9c295220ba2953dca4c36de"} Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.903133 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-config-data" (OuterVolumeSpecName: "config-data") pod "36cb3f13-e839-473d-a14e-d4bd4512d00d" (UID: "36cb3f13-e839-473d-a14e-d4bd4512d00d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.905321 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b96594b8-9jfp2" event={"ID":"266a132c-d822-44b5-a75c-e359c65c78ea","Type":"ContainerStarted","Data":"69f4ad74289397834f86675e68a1f3904003531ab17cd975b1755b37d9179b38"} Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.913214 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" event={"ID":"5ee7d188-54dd-45e5-95f9-75b0bcef52d3","Type":"ContainerStarted","Data":"423cfb3fa77200ebe2b1ddfe6947fe6215d0d6aef96716447dce0974bd844638"} Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.923548 4705 generic.go:334] "Generic (PLEG): container finished" podID="36cb3f13-e839-473d-a14e-d4bd4512d00d" containerID="129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870" exitCode=0 Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.923579 4705 generic.go:334] "Generic (PLEG): container finished" podID="36cb3f13-e839-473d-a14e-d4bd4512d00d" containerID="5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37" exitCode=143 Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.923697 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"36cb3f13-e839-473d-a14e-d4bd4512d00d","Type":"ContainerDied","Data":"129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870"} Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.923756 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"36cb3f13-e839-473d-a14e-d4bd4512d00d","Type":"ContainerDied","Data":"5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37"} Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.923772 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"36cb3f13-e839-473d-a14e-d4bd4512d00d","Type":"ContainerDied","Data":"4bf5460d667925d8218b6a083959357be812d72e325d969e401860c99883710a"} Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.923792 4705 scope.go:117] "RemoveContainer" containerID="129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.923802 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" podUID="385af144-eae2-4843-804d-cd7c42a6b833" containerName="dnsmasq-dns" containerID="cri-o://f2acf8383febf1af3d46d448dbdb8078c43de49bdad72ce41b31cedce0394b33" gracePeriod=10 Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.923997 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.935065 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.935109 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36cb3f13-e839-473d-a14e-d4bd4512d00d-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.935143 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.935158 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j6f8\" (UniqueName: \"kubernetes.io/projected/36cb3f13-e839-473d-a14e-d4bd4512d00d-kube-api-access-7j6f8\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:45 crc 
kubenswrapper[4705]: I0124 08:01:45.935171 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.935180 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36cb3f13-e839-473d-a14e-d4bd4512d00d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.935190 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36cb3f13-e839-473d-a14e-d4bd4512d00d-logs\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.938273 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.938250721 podStartE2EDuration="7.938250721s" podCreationTimestamp="2026-01-24 08:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:45.934076244 +0000 UTC m=+1244.653949532" watchObservedRunningTime="2026-01-24 08:01:45.938250721 +0000 UTC m=+1244.658124009" Jan 24 08:01:45 crc kubenswrapper[4705]: I0124 08:01:45.991742 4705 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.025916 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.039157 4705 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:46 crc 
kubenswrapper[4705]: I0124 08:01:46.051173 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.062559 4705 scope.go:117] "RemoveContainer" containerID="5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.067017 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 08:01:46 crc kubenswrapper[4705]: E0124 08:01:46.067694 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36cb3f13-e839-473d-a14e-d4bd4512d00d" containerName="glance-log" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.067708 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="36cb3f13-e839-473d-a14e-d4bd4512d00d" containerName="glance-log" Jan 24 08:01:46 crc kubenswrapper[4705]: E0124 08:01:46.067719 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36cb3f13-e839-473d-a14e-d4bd4512d00d" containerName="glance-httpd" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.067726 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="36cb3f13-e839-473d-a14e-d4bd4512d00d" containerName="glance-httpd" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.067939 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="36cb3f13-e839-473d-a14e-d4bd4512d00d" containerName="glance-log" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.067956 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="36cb3f13-e839-473d-a14e-d4bd4512d00d" containerName="glance-httpd" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.068986 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.075783 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.078604 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.080116 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.145275 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.145344 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.145381 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.145429 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.145455 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.145488 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-logs\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.145507 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lgtg\" (UniqueName: \"kubernetes.io/projected/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-kube-api-access-4lgtg\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.145542 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.247284 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.247333 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.247356 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-logs\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.247376 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lgtg\" (UniqueName: \"kubernetes.io/projected/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-kube-api-access-4lgtg\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.247479 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.247555 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.247598 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.247631 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.250672 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.250691 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.257311 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.257790 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-logs\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.281651 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.283612 4705 scope.go:117] "RemoveContainer" containerID="129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870" Jan 24 08:01:46 crc kubenswrapper[4705]: E0124 08:01:46.288950 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870\": container with ID starting with 129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870 not found: ID does not exist" containerID="129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.289005 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870"} err="failed to get container status \"129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870\": rpc error: code = NotFound desc = could not find container \"129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870\": container with ID starting 
with 129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870 not found: ID does not exist" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.289034 4705 scope.go:117] "RemoveContainer" containerID="5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37" Jan 24 08:01:46 crc kubenswrapper[4705]: E0124 08:01:46.292904 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37\": container with ID starting with 5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37 not found: ID does not exist" containerID="5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.292937 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37"} err="failed to get container status \"5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37\": rpc error: code = NotFound desc = could not find container \"5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37\": container with ID starting with 5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37 not found: ID does not exist" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.292954 4705 scope.go:117] "RemoveContainer" containerID="129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.297485 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.299372 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4lgtg\" (UniqueName: \"kubernetes.io/projected/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-kube-api-access-4lgtg\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.310014 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870"} err="failed to get container status \"129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870\": rpc error: code = NotFound desc = could not find container \"129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870\": container with ID starting with 129e9dd81298b0bb96272783cff6e04df88acf6556f5e919162a7f10cdcb7870 not found: ID does not exist" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.310058 4705 scope.go:117] "RemoveContainer" containerID="5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.315005 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37"} err="failed to get container status \"5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37\": rpc error: code = NotFound desc = could not find container \"5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37\": container with ID starting with 5fce07f2012099ed86e7a4cdb87a5e49d5edacce1b9bb128558c7ca6d6b5ec37 not found: ID does not exist" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.327796 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.566904 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.656154 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.784839 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-scripts\") pod \"6eb2004e-d936-4fb9-929b-b949158ac9b8\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.784974 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-combined-ca-bundle\") pod \"6eb2004e-d936-4fb9-929b-b949158ac9b8\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.785020 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw47p\" (UniqueName: \"kubernetes.io/projected/6eb2004e-d936-4fb9-929b-b949158ac9b8-kube-api-access-nw47p\") pod \"6eb2004e-d936-4fb9-929b-b949158ac9b8\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.785190 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eb2004e-d936-4fb9-929b-b949158ac9b8-logs\") pod \"6eb2004e-d936-4fb9-929b-b949158ac9b8\" (UID: 
\"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.785253 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-config-data\") pod \"6eb2004e-d936-4fb9-929b-b949158ac9b8\" (UID: \"6eb2004e-d936-4fb9-929b-b949158ac9b8\") " Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.794196 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb2004e-d936-4fb9-929b-b949158ac9b8-logs" (OuterVolumeSpecName: "logs") pod "6eb2004e-d936-4fb9-929b-b949158ac9b8" (UID: "6eb2004e-d936-4fb9-929b-b949158ac9b8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.815017 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-scripts" (OuterVolumeSpecName: "scripts") pod "6eb2004e-d936-4fb9-929b-b949158ac9b8" (UID: "6eb2004e-d936-4fb9-929b-b949158ac9b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.825931 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb2004e-d936-4fb9-929b-b949158ac9b8-kube-api-access-nw47p" (OuterVolumeSpecName: "kube-api-access-nw47p") pod "6eb2004e-d936-4fb9-929b-b949158ac9b8" (UID: "6eb2004e-d936-4fb9-929b-b949158ac9b8"). InnerVolumeSpecName "kube-api-access-nw47p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.863970 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-config-data" (OuterVolumeSpecName: "config-data") pod "6eb2004e-d936-4fb9-929b-b949158ac9b8" (UID: "6eb2004e-d936-4fb9-929b-b949158ac9b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.868328 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.875450 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.891754 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw47p\" (UniqueName: \"kubernetes.io/projected/6eb2004e-d936-4fb9-929b-b949158ac9b8-kube-api-access-nw47p\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.891784 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eb2004e-d936-4fb9-929b-b949158ac9b8-logs\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.891797 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.891811 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.896999 4705 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6eb2004e-d936-4fb9-929b-b949158ac9b8" (UID: "6eb2004e-d936-4fb9-929b-b949158ac9b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.982549 4705 generic.go:334] "Generic (PLEG): container finished" podID="e9ce9412-740d-4e20-8af5-58ebfc498c94" containerID="13015c5180b7a5fe56d9cc75dd2c7cc86aa8463f2265b0623d4aed95278c3c44" exitCode=0 Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.982609 4705 generic.go:334] "Generic (PLEG): container finished" podID="e9ce9412-740d-4e20-8af5-58ebfc498c94" containerID="4f3eb92733f9b03af49fb39cfbb2debda278ec008867de3ba58349524e5c93db" exitCode=143 Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.982641 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9ce9412-740d-4e20-8af5-58ebfc498c94","Type":"ContainerDied","Data":"13015c5180b7a5fe56d9cc75dd2c7cc86aa8463f2265b0623d4aed95278c3c44"} Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.982759 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9ce9412-740d-4e20-8af5-58ebfc498c94","Type":"ContainerDied","Data":"4f3eb92733f9b03af49fb39cfbb2debda278ec008867de3ba58349524e5c93db"} Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.993078 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-config\") pod \"385af144-eae2-4843-804d-cd7c42a6b833\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.993188 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-dns-swift-storage-0\") pod \"385af144-eae2-4843-804d-cd7c42a6b833\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.993333 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-ovsdbserver-sb\") pod \"385af144-eae2-4843-804d-cd7c42a6b833\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.993386 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-ovsdbserver-nb\") pod \"385af144-eae2-4843-804d-cd7c42a6b833\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.993430 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25xkq\" (UniqueName: \"kubernetes.io/projected/385af144-eae2-4843-804d-cd7c42a6b833-kube-api-access-25xkq\") pod \"385af144-eae2-4843-804d-cd7c42a6b833\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.993464 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-dns-svc\") pod \"385af144-eae2-4843-804d-cd7c42a6b833\" (UID: \"385af144-eae2-4843-804d-cd7c42a6b833\") " Jan 24 08:01:46 crc kubenswrapper[4705]: I0124 08:01:46.994154 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb2004e-d936-4fb9-929b-b949158ac9b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.010760 4705 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/385af144-eae2-4843-804d-cd7c42a6b833-kube-api-access-25xkq" (OuterVolumeSpecName: "kube-api-access-25xkq") pod "385af144-eae2-4843-804d-cd7c42a6b833" (UID: "385af144-eae2-4843-804d-cd7c42a6b833"). InnerVolumeSpecName "kube-api-access-25xkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.018237 4705 generic.go:334] "Generic (PLEG): container finished" podID="5a444b75-4995-40f9-8432-b62814685b02" containerID="61561a32e954dfb4dce0a870ed165eece13279f26a9aec75ff9695cbacfca56a" exitCode=0 Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.018336 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f8ghj" event={"ID":"5a444b75-4995-40f9-8432-b62814685b02","Type":"ContainerDied","Data":"61561a32e954dfb4dce0a870ed165eece13279f26a9aec75ff9695cbacfca56a"} Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.029103 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b96594b8-9jfp2" event={"ID":"266a132c-d822-44b5-a75c-e359c65c78ea","Type":"ContainerStarted","Data":"46fb9222c6415345ec7a8700123b928768dcdad6f528b20811dbe265f3c214b2"} Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.029156 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b96594b8-9jfp2" event={"ID":"266a132c-d822-44b5-a75c-e359c65c78ea","Type":"ContainerStarted","Data":"dea2276ce0890634faad2f01c08bc3abb5132cd1425bd3cf9af51cd32d6af9a9"} Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.029925 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.042084 4705 generic.go:334] "Generic (PLEG): container finished" podID="5ee7d188-54dd-45e5-95f9-75b0bcef52d3" containerID="810d4d6088edda10ba93ccd3cc719cbe45a69480a7a447a5b2df5305da61fbc5" exitCode=0 Jan 24 08:01:47 
crc kubenswrapper[4705]: I0124 08:01:47.042197 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" event={"ID":"5ee7d188-54dd-45e5-95f9-75b0bcef52d3","Type":"ContainerDied","Data":"810d4d6088edda10ba93ccd3cc719cbe45a69480a7a447a5b2df5305da61fbc5"} Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.083436 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "385af144-eae2-4843-804d-cd7c42a6b833" (UID: "385af144-eae2-4843-804d-cd7c42a6b833"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.098162 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-pvrgf" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.099393 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.099490 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25xkq\" (UniqueName: \"kubernetes.io/projected/385af144-eae2-4843-804d-cd7c42a6b833-kube-api-access-25xkq\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.099576 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pvrgf" event={"ID":"6eb2004e-d936-4fb9-929b-b949158ac9b8","Type":"ContainerDied","Data":"4ddb0d398fad6a4040af9c45ea15cf94ba12f0628a4130fec25b8359b71f2bd1"} Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.099654 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ddb0d398fad6a4040af9c45ea15cf94ba12f0628a4130fec25b8359b71f2bd1" 
Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.132597 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.149979 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "385af144-eae2-4843-804d-cd7c42a6b833" (UID: "385af144-eae2-4843-804d-cd7c42a6b833"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.150593 4705 generic.go:334] "Generic (PLEG): container finished" podID="385af144-eae2-4843-804d-cd7c42a6b833" containerID="f2acf8383febf1af3d46d448dbdb8078c43de49bdad72ce41b31cedce0394b33" exitCode=0 Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.150668 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.150955 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" event={"ID":"385af144-eae2-4843-804d-cd7c42a6b833","Type":"ContainerDied","Data":"f2acf8383febf1af3d46d448dbdb8078c43de49bdad72ce41b31cedce0394b33"} Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.151046 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-dz5x6" event={"ID":"385af144-eae2-4843-804d-cd7c42a6b833","Type":"ContainerDied","Data":"1243285316e32dcf32a5bd72fe139d61cca6b1159df74b402282ad11c4fbafd5"} Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.151073 4705 scope.go:117] "RemoveContainer" containerID="f2acf8383febf1af3d46d448dbdb8078c43de49bdad72ce41b31cedce0394b33" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.157984 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-config" (OuterVolumeSpecName: "config") pod "385af144-eae2-4843-804d-cd7c42a6b833" (UID: "385af144-eae2-4843-804d-cd7c42a6b833"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.183733 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "385af144-eae2-4843-804d-cd7c42a6b833" (UID: "385af144-eae2-4843-804d-cd7c42a6b833"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.198559 4705 scope.go:117] "RemoveContainer" containerID="df7d90c7229d1b466c3f9c3a608f687a6ec8f77f32c174ac88a5cd9c3bea1cea" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.205415 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.205450 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.205465 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.218738 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9b94fb6bf-sxxmr"] Jan 24 08:01:47 crc kubenswrapper[4705]: E0124 08:01:47.219418 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ce9412-740d-4e20-8af5-58ebfc498c94" containerName="glance-httpd" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.219437 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ce9412-740d-4e20-8af5-58ebfc498c94" containerName="glance-httpd" Jan 24 08:01:47 crc kubenswrapper[4705]: E0124 08:01:47.219458 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb2004e-d936-4fb9-929b-b949158ac9b8" containerName="placement-db-sync" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.219466 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb2004e-d936-4fb9-929b-b949158ac9b8" containerName="placement-db-sync" Jan 24 08:01:47 
crc kubenswrapper[4705]: E0124 08:01:47.219475 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="385af144-eae2-4843-804d-cd7c42a6b833" containerName="init" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.219483 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="385af144-eae2-4843-804d-cd7c42a6b833" containerName="init" Jan 24 08:01:47 crc kubenswrapper[4705]: E0124 08:01:47.219513 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ce9412-740d-4e20-8af5-58ebfc498c94" containerName="glance-log" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.219520 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ce9412-740d-4e20-8af5-58ebfc498c94" containerName="glance-log" Jan 24 08:01:47 crc kubenswrapper[4705]: E0124 08:01:47.219531 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="385af144-eae2-4843-804d-cd7c42a6b833" containerName="dnsmasq-dns" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.219538 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="385af144-eae2-4843-804d-cd7c42a6b833" containerName="dnsmasq-dns" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.219773 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb2004e-d936-4fb9-929b-b949158ac9b8" containerName="placement-db-sync" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.219787 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="385af144-eae2-4843-804d-cd7c42a6b833" containerName="dnsmasq-dns" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.219802 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ce9412-740d-4e20-8af5-58ebfc498c94" containerName="glance-httpd" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.219814 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ce9412-740d-4e20-8af5-58ebfc498c94" containerName="glance-log" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.221985 
4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.231955 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77b96594b8-9jfp2" podStartSLOduration=3.231927101 podStartE2EDuration="3.231927101s" podCreationTimestamp="2026-01-24 08:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:47.12380217 +0000 UTC m=+1245.843675458" watchObservedRunningTime="2026-01-24 08:01:47.231927101 +0000 UTC m=+1245.951800389" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.265348 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.265617 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.277516 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-55877bd6d-swpx2"] Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.279424 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "385af144-eae2-4843-804d-cd7c42a6b833" (UID: "385af144-eae2-4843-804d-cd7c42a6b833"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.280583 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.286560 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.287661 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.288717 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.288694 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.297527 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bdqk9" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.306729 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-config-data\") pod \"e9ce9412-740d-4e20-8af5-58ebfc498c94\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.307301 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ce9412-740d-4e20-8af5-58ebfc498c94-logs\") pod \"e9ce9412-740d-4e20-8af5-58ebfc498c94\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.307368 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-combined-ca-bundle\") pod \"e9ce9412-740d-4e20-8af5-58ebfc498c94\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " Jan 24 08:01:47 crc kubenswrapper[4705]: 
I0124 08:01:47.307460 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndq6j\" (UniqueName: \"kubernetes.io/projected/e9ce9412-740d-4e20-8af5-58ebfc498c94-kube-api-access-ndq6j\") pod \"e9ce9412-740d-4e20-8af5-58ebfc498c94\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.307491 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"e9ce9412-740d-4e20-8af5-58ebfc498c94\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.307611 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9ce9412-740d-4e20-8af5-58ebfc498c94-httpd-run\") pod \"e9ce9412-740d-4e20-8af5-58ebfc498c94\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.307655 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-scripts\") pod \"e9ce9412-740d-4e20-8af5-58ebfc498c94\" (UID: \"e9ce9412-740d-4e20-8af5-58ebfc498c94\") " Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.308145 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-ovndb-tls-certs\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.308175 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-internal-tls-certs\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.308221 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-httpd-config\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.308265 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-combined-ca-bundle\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.308305 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9nkg\" (UniqueName: \"kubernetes.io/projected/688d82f6-748b-42fa-8595-b24a65ba77d3-kube-api-access-p9nkg\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.308328 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-public-tls-certs\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.308387 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-config\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.308461 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/385af144-eae2-4843-804d-cd7c42a6b833-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.310980 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9ce9412-740d-4e20-8af5-58ebfc498c94-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e9ce9412-740d-4e20-8af5-58ebfc498c94" (UID: "e9ce9412-740d-4e20-8af5-58ebfc498c94"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.318710 4705 scope.go:117] "RemoveContainer" containerID="f2acf8383febf1af3d46d448dbdb8078c43de49bdad72ce41b31cedce0394b33" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.320830 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9ce9412-740d-4e20-8af5-58ebfc498c94-logs" (OuterVolumeSpecName: "logs") pod "e9ce9412-740d-4e20-8af5-58ebfc498c94" (UID: "e9ce9412-740d-4e20-8af5-58ebfc498c94"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.322390 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9b94fb6bf-sxxmr"] Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.323329 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9ce9412-740d-4e20-8af5-58ebfc498c94-kube-api-access-ndq6j" (OuterVolumeSpecName: "kube-api-access-ndq6j") pod "e9ce9412-740d-4e20-8af5-58ebfc498c94" (UID: "e9ce9412-740d-4e20-8af5-58ebfc498c94"). 
InnerVolumeSpecName "kube-api-access-ndq6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.327606 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "e9ce9412-740d-4e20-8af5-58ebfc498c94" (UID: "e9ce9412-740d-4e20-8af5-58ebfc498c94"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 08:01:47 crc kubenswrapper[4705]: E0124 08:01:47.327813 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2acf8383febf1af3d46d448dbdb8078c43de49bdad72ce41b31cedce0394b33\": container with ID starting with f2acf8383febf1af3d46d448dbdb8078c43de49bdad72ce41b31cedce0394b33 not found: ID does not exist" containerID="f2acf8383febf1af3d46d448dbdb8078c43de49bdad72ce41b31cedce0394b33" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.329957 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2acf8383febf1af3d46d448dbdb8078c43de49bdad72ce41b31cedce0394b33"} err="failed to get container status \"f2acf8383febf1af3d46d448dbdb8078c43de49bdad72ce41b31cedce0394b33\": rpc error: code = NotFound desc = could not find container \"f2acf8383febf1af3d46d448dbdb8078c43de49bdad72ce41b31cedce0394b33\": container with ID starting with f2acf8383febf1af3d46d448dbdb8078c43de49bdad72ce41b31cedce0394b33 not found: ID does not exist" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.329990 4705 scope.go:117] "RemoveContainer" containerID="df7d90c7229d1b466c3f9c3a608f687a6ec8f77f32c174ac88a5cd9c3bea1cea" Jan 24 08:01:47 crc kubenswrapper[4705]: E0124 08:01:47.334751 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"df7d90c7229d1b466c3f9c3a608f687a6ec8f77f32c174ac88a5cd9c3bea1cea\": container with ID starting with df7d90c7229d1b466c3f9c3a608f687a6ec8f77f32c174ac88a5cd9c3bea1cea not found: ID does not exist" containerID="df7d90c7229d1b466c3f9c3a608f687a6ec8f77f32c174ac88a5cd9c3bea1cea" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.335220 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df7d90c7229d1b466c3f9c3a608f687a6ec8f77f32c174ac88a5cd9c3bea1cea"} err="failed to get container status \"df7d90c7229d1b466c3f9c3a608f687a6ec8f77f32c174ac88a5cd9c3bea1cea\": rpc error: code = NotFound desc = could not find container \"df7d90c7229d1b466c3f9c3a608f687a6ec8f77f32c174ac88a5cd9c3bea1cea\": container with ID starting with df7d90c7229d1b466c3f9c3a608f687a6ec8f77f32c174ac88a5cd9c3bea1cea not found: ID does not exist" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.337848 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-scripts" (OuterVolumeSpecName: "scripts") pod "e9ce9412-740d-4e20-8af5-58ebfc498c94" (UID: "e9ce9412-740d-4e20-8af5-58ebfc498c94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.356575 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55877bd6d-swpx2"] Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.371301 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9ce9412-740d-4e20-8af5-58ebfc498c94" (UID: "e9ce9412-740d-4e20-8af5-58ebfc498c94"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.411972 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-public-tls-certs\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.412224 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9nkg\" (UniqueName: \"kubernetes.io/projected/688d82f6-748b-42fa-8595-b24a65ba77d3-kube-api-access-p9nkg\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.412317 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e650ce3a-8142-469f-bb17-116626c2141b-combined-ca-bundle\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.412386 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e650ce3a-8142-469f-bb17-116626c2141b-logs\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.412537 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e650ce3a-8142-469f-bb17-116626c2141b-internal-tls-certs\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " 
pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.412718 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-config\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.412870 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-ovndb-tls-certs\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.412948 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-internal-tls-certs\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.413090 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-httpd-config\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.414162 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e650ce3a-8142-469f-bb17-116626c2141b-config-data\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.414276 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e650ce3a-8142-469f-bb17-116626c2141b-public-tls-certs\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.414361 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q87bj\" (UniqueName: \"kubernetes.io/projected/e650ce3a-8142-469f-bb17-116626c2141b-kube-api-access-q87bj\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.414427 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e650ce3a-8142-469f-bb17-116626c2141b-scripts\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.414497 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-combined-ca-bundle\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.414625 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ce9412-740d-4e20-8af5-58ebfc498c94-logs\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.414691 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.414745 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndq6j\" (UniqueName: \"kubernetes.io/projected/e9ce9412-740d-4e20-8af5-58ebfc498c94-kube-api-access-ndq6j\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.414808 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.414896 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9ce9412-740d-4e20-8af5-58ebfc498c94-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.414958 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.459506 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-ovndb-tls-certs\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.464108 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-config-data" (OuterVolumeSpecName: "config-data") pod "e9ce9412-740d-4e20-8af5-58ebfc498c94" (UID: "e9ce9412-740d-4e20-8af5-58ebfc498c94"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.464676 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9nkg\" (UniqueName: \"kubernetes.io/projected/688d82f6-748b-42fa-8595-b24a65ba77d3-kube-api-access-p9nkg\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.465760 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-httpd-config\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.466624 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-public-tls-certs\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.471092 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-internal-tls-certs\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.477706 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-combined-ca-bundle\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.492751 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-config\") pod \"neutron-9b94fb6bf-sxxmr\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.495395 4705 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.520108 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e650ce3a-8142-469f-bb17-116626c2141b-combined-ca-bundle\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.520933 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e650ce3a-8142-469f-bb17-116626c2141b-logs\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.520982 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e650ce3a-8142-469f-bb17-116626c2141b-internal-tls-certs\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.521225 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e650ce3a-8142-469f-bb17-116626c2141b-config-data\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " 
pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.521296 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e650ce3a-8142-469f-bb17-116626c2141b-public-tls-certs\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.521354 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q87bj\" (UniqueName: \"kubernetes.io/projected/e650ce3a-8142-469f-bb17-116626c2141b-kube-api-access-q87bj\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.521382 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e650ce3a-8142-469f-bb17-116626c2141b-scripts\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.521430 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e650ce3a-8142-469f-bb17-116626c2141b-logs\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.522785 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ce9412-740d-4e20-8af5-58ebfc498c94-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.522860 4705 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.525531 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e650ce3a-8142-469f-bb17-116626c2141b-combined-ca-bundle\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.525842 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e650ce3a-8142-469f-bb17-116626c2141b-config-data\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.527800 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e650ce3a-8142-469f-bb17-116626c2141b-public-tls-certs\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.527878 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e650ce3a-8142-469f-bb17-116626c2141b-internal-tls-certs\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.528153 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e650ce3a-8142-469f-bb17-116626c2141b-scripts\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 
08:01:47.548858 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q87bj\" (UniqueName: \"kubernetes.io/projected/e650ce3a-8142-469f-bb17-116626c2141b-kube-api-access-q87bj\") pod \"placement-55877bd6d-swpx2\" (UID: \"e650ce3a-8142-469f-bb17-116626c2141b\") " pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.742663 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.743278 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.753517 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36cb3f13-e839-473d-a14e-d4bd4512d00d" path="/var/lib/kubelet/pods/36cb3f13-e839-473d-a14e-d4bd4512d00d/volumes" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.834236 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.903842 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-dz5x6"] Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.909429 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jrq4n" Jan 24 08:01:47 crc kubenswrapper[4705]: I0124 08:01:47.912002 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-dz5x6"] Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.063453 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v856d\" (UniqueName: \"kubernetes.io/projected/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-kube-api-access-v856d\") pod \"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9\" (UID: \"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9\") " Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.066750 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-db-sync-config-data\") pod \"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9\" (UID: \"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9\") " Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.067012 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-combined-ca-bundle\") pod \"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9\" (UID: \"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9\") " Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.097011 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9" (UID: "bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.103363 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-kube-api-access-v856d" (OuterVolumeSpecName: "kube-api-access-v856d") pod "bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9" (UID: "bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9"). InnerVolumeSpecName "kube-api-access-v856d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.131605 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9" (UID: "bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.169124 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.169184 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v856d\" (UniqueName: \"kubernetes.io/projected/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-kube-api-access-v856d\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.169202 4705 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.188176 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" 
event={"ID":"5ee7d188-54dd-45e5-95f9-75b0bcef52d3","Type":"ContainerStarted","Data":"103f816b618e0a07eea41485dccd50ef744c4bccd3257ecdde6bfc77ca197def"} Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.192849 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.220215 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c44b4bd9-f2df-4ca0-8214-904cd04b10e8","Type":"ContainerStarted","Data":"bf8f0391836a531bd9f53a53b6d93ae8238a5b53c0c748d651572e0855fb24d1"} Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.230225 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-866968f4bc-4s76p"] Jan 24 08:01:48 crc kubenswrapper[4705]: E0124 08:01:48.231602 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9" containerName="barbican-db-sync" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.231771 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9" containerName="barbican-db-sync" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.232102 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9" containerName="barbican-db-sync" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.233661 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.237247 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.249159 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-866968f4bc-4s76p"] Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.270666 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" podStartSLOduration=4.270616217 podStartE2EDuration="4.270616217s" podCreationTimestamp="2026-01-24 08:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:48.223640085 +0000 UTC m=+1246.943513373" watchObservedRunningTime="2026-01-24 08:01:48.270616217 +0000 UTC m=+1246.990489505" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.291100 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.291103 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9ce9412-740d-4e20-8af5-58ebfc498c94","Type":"ContainerDied","Data":"0725176db28d7ae9f9e7471957422f39b61b63a8fa80ba961004880b344247ed"} Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.291168 4705 scope.go:117] "RemoveContainer" containerID="13015c5180b7a5fe56d9cc75dd2c7cc86aa8463f2265b0623d4aed95278c3c44" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.297072 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-config-data\") pod \"barbican-worker-866968f4bc-4s76p\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.299545 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c492e887-1cb2-4b91-8f12-55e01657e02a-logs\") pod \"barbican-worker-866968f4bc-4s76p\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.299732 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-config-data-custom\") pod \"barbican-worker-866968f4bc-4s76p\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.301682 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72xr2\" (UniqueName: 
\"kubernetes.io/projected/c492e887-1cb2-4b91-8f12-55e01657e02a-kube-api-access-72xr2\") pod \"barbican-worker-866968f4bc-4s76p\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.301861 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-combined-ca-bundle\") pod \"barbican-worker-866968f4bc-4s76p\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.315366 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jrq4n" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.317228 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jrq4n" event={"ID":"bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9","Type":"ContainerDied","Data":"fb94e8eba020fd8e89d948fc981a5efc512350727877613ce7b060c2e7eb89aa"} Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.317446 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb94e8eba020fd8e89d948fc981a5efc512350727877613ce7b060c2e7eb89aa" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.368144 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5d459d77f8-jpdgn"] Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.370926 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.381366 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d459d77f8-jpdgn"] Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.386091 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.396742 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.406650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3dfdcfb-291c-48dc-a111-6037cf854b1c-logs\") pod \"barbican-keystone-listener-5d459d77f8-jpdgn\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.406739 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-config-data\") pod \"barbican-worker-866968f4bc-4s76p\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.406805 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zd5n\" (UniqueName: \"kubernetes.io/projected/f3dfdcfb-291c-48dc-a111-6037cf854b1c-kube-api-access-9zd5n\") pod \"barbican-keystone-listener-5d459d77f8-jpdgn\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.406925 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c492e887-1cb2-4b91-8f12-55e01657e02a-logs\") pod \"barbican-worker-866968f4bc-4s76p\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.406986 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-config-data\") pod \"barbican-keystone-listener-5d459d77f8-jpdgn\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.407026 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-config-data-custom\") pod \"barbican-worker-866968f4bc-4s76p\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.407096 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-combined-ca-bundle\") pod \"barbican-keystone-listener-5d459d77f8-jpdgn\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.407116 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-config-data-custom\") pod \"barbican-keystone-listener-5d459d77f8-jpdgn\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc 
kubenswrapper[4705]: I0124 08:01:48.407141 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72xr2\" (UniqueName: \"kubernetes.io/projected/c492e887-1cb2-4b91-8f12-55e01657e02a-kube-api-access-72xr2\") pod \"barbican-worker-866968f4bc-4s76p\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.407164 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-combined-ca-bundle\") pod \"barbican-worker-866968f4bc-4s76p\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.416816 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.417001 4705 scope.go:117] "RemoveContainer" containerID="4f3eb92733f9b03af49fb39cfbb2debda278ec008867de3ba58349524e5c93db" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.417969 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-config-data\") pod \"barbican-worker-866968f4bc-4s76p\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.418444 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c492e887-1cb2-4b91-8f12-55e01657e02a-logs\") pod \"barbican-worker-866968f4bc-4s76p\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.422789 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-config-data-custom\") pod \"barbican-worker-866968f4bc-4s76p\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.429003 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-combined-ca-bundle\") pod \"barbican-worker-866968f4bc-4s76p\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.430446 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.432318 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.443478 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.448087 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.453993 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.455335 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72xr2\" (UniqueName: \"kubernetes.io/projected/c492e887-1cb2-4b91-8f12-55e01657e02a-kube-api-access-72xr2\") pod \"barbican-worker-866968f4bc-4s76p\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc 
kubenswrapper[4705]: I0124 08:01:48.497210 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55877bd6d-swpx2"] Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.506076 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-lcm9r"] Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.511869 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3dfdcfb-291c-48dc-a111-6037cf854b1c-logs\") pod \"barbican-keystone-listener-5d459d77f8-jpdgn\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.521522 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzksz\" (UniqueName: \"kubernetes.io/projected/2751ca08-c852-4a96-85db-d8ace6894326-kube-api-access-zzksz\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.521664 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2751ca08-c852-4a96-85db-d8ace6894326-logs\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.521910 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-scripts\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.521993 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zd5n\" (UniqueName: \"kubernetes.io/projected/f3dfdcfb-291c-48dc-a111-6037cf854b1c-kube-api-access-9zd5n\") pod \"barbican-keystone-listener-5d459d77f8-jpdgn\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.522103 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.522338 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-config-data\") pod \"barbican-keystone-listener-5d459d77f8-jpdgn\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.522469 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2751ca08-c852-4a96-85db-d8ace6894326-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.522561 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc 
kubenswrapper[4705]: I0124 08:01:48.522735 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-combined-ca-bundle\") pod \"barbican-keystone-listener-5d459d77f8-jpdgn\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.522831 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-config-data-custom\") pod \"barbican-keystone-listener-5d459d77f8-jpdgn\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.522927 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-config-data\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.523086 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.518788 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3dfdcfb-291c-48dc-a111-6037cf854b1c-logs\") pod \"barbican-keystone-listener-5d459d77f8-jpdgn\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " 
pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.540197 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-config-data-custom\") pod \"barbican-keystone-listener-5d459d77f8-jpdgn\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.541013 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-combined-ca-bundle\") pod \"barbican-keystone-listener-5d459d77f8-jpdgn\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.546838 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-config-data\") pod \"barbican-keystone-listener-5d459d77f8-jpdgn\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.551225 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-nkj6g"] Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.558522 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.578605 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-nkj6g"] Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.589995 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zd5n\" (UniqueName: \"kubernetes.io/projected/f3dfdcfb-291c-48dc-a111-6037cf854b1c-kube-api-access-9zd5n\") pod \"barbican-keystone-listener-5d459d77f8-jpdgn\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.597355 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.603575 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-b898998b6-6mcbg"] Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.605363 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.610023 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628172 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-config-data\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628219 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-config\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628260 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628288 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-config-data\") pod \"barbican-api-b898998b6-6mcbg\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628316 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-config-data-custom\") pod \"barbican-api-b898998b6-6mcbg\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628345 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-combined-ca-bundle\") pod \"barbican-api-b898998b6-6mcbg\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628367 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628398 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628418 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpg9g\" (UniqueName: \"kubernetes.io/projected/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-kube-api-access-dpg9g\") pod \"barbican-api-b898998b6-6mcbg\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628437 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628464 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzksz\" (UniqueName: \"kubernetes.io/projected/2751ca08-c852-4a96-85db-d8ace6894326-kube-api-access-zzksz\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628489 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2751ca08-c852-4a96-85db-d8ace6894326-logs\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628507 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dcv9\" (UniqueName: \"kubernetes.io/projected/eb0f915f-127e-4dc3-8550-c75361485387-kube-api-access-2dcv9\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628538 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628574 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-scripts\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628605 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628641 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2751ca08-c852-4a96-85db-d8ace6894326-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628662 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.628686 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-logs\") pod \"barbican-api-b898998b6-6mcbg\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.633075 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.633130 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2751ca08-c852-4a96-85db-d8ace6894326-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.639024 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-config-data\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.639039 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2751ca08-c852-4a96-85db-d8ace6894326-logs\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.655964 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.660012 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-scripts\") pod \"glance-default-external-api-0\" 
(UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.669179 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzksz\" (UniqueName: \"kubernetes.io/projected/2751ca08-c852-4a96-85db-d8ace6894326-kube-api-access-zzksz\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.669799 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.730379 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpg9g\" (UniqueName: \"kubernetes.io/projected/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-kube-api-access-dpg9g\") pod \"barbican-api-b898998b6-6mcbg\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.730440 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.730479 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dcv9\" (UniqueName: \"kubernetes.io/projected/eb0f915f-127e-4dc3-8550-c75361485387-kube-api-access-2dcv9\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: 
\"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.730510 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.730589 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-logs\") pod \"barbican-api-b898998b6-6mcbg\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.730632 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-config\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.730671 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-config-data\") pod \"barbican-api-b898998b6-6mcbg\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.730697 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-config-data-custom\") pod \"barbican-api-b898998b6-6mcbg\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:48 crc 
kubenswrapper[4705]: I0124 08:01:48.730724 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-combined-ca-bundle\") pod \"barbican-api-b898998b6-6mcbg\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.730747 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.730769 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.733546 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-config\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.733772 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.734737 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.735687 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-logs\") pod \"barbican-api-b898998b6-6mcbg\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.736502 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.740267 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-config-data-custom\") pod \"barbican-api-b898998b6-6mcbg\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.740597 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-combined-ca-bundle\") pod \"barbican-api-b898998b6-6mcbg\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.742724 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.760502 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dcv9\" (UniqueName: \"kubernetes.io/projected/eb0f915f-127e-4dc3-8550-c75361485387-kube-api-access-2dcv9\") pod \"dnsmasq-dns-848cf88cfc-nkj6g\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.766434 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-config-data\") pod \"barbican-api-b898998b6-6mcbg\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.768840 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b898998b6-6mcbg"] Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.774893 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpg9g\" (UniqueName: \"kubernetes.io/projected/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-kube-api-access-dpg9g\") pod \"barbican-api-b898998b6-6mcbg\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.779366 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.783400 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.788916 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.804143 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:48 crc kubenswrapper[4705]: I0124 08:01:48.809206 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.020574 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9b94fb6bf-sxxmr"] Jan 24 08:01:49 crc kubenswrapper[4705]: W0124 08:01:49.115431 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod688d82f6_748b_42fa_8595_b24a65ba77d3.slice/crio-4ea4bbe7fa8daa74c24823206ff8d6c30fc26b0d5440a29b1d1d67364e2b2046 WatchSource:0}: Error finding container 4ea4bbe7fa8daa74c24823206ff8d6c30fc26b0d5440a29b1d1d67364e2b2046: Status 404 returned error can't find the container with id 4ea4bbe7fa8daa74c24823206ff8d6c30fc26b0d5440a29b1d1d67364e2b2046 Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.147595 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.283726 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-combined-ca-bundle\") pod \"5a444b75-4995-40f9-8432-b62814685b02\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.283934 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nclnh\" (UniqueName: \"kubernetes.io/projected/5a444b75-4995-40f9-8432-b62814685b02-kube-api-access-nclnh\") pod \"5a444b75-4995-40f9-8432-b62814685b02\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.284049 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-fernet-keys\") pod \"5a444b75-4995-40f9-8432-b62814685b02\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.284093 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-scripts\") pod \"5a444b75-4995-40f9-8432-b62814685b02\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.284114 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-config-data\") pod \"5a444b75-4995-40f9-8432-b62814685b02\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.301007 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-credential-keys\") pod \"5a444b75-4995-40f9-8432-b62814685b02\" (UID: \"5a444b75-4995-40f9-8432-b62814685b02\") " Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.327610 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a444b75-4995-40f9-8432-b62814685b02-kube-api-access-nclnh" (OuterVolumeSpecName: "kube-api-access-nclnh") pod "5a444b75-4995-40f9-8432-b62814685b02" (UID: "5a444b75-4995-40f9-8432-b62814685b02"). InnerVolumeSpecName "kube-api-access-nclnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.328082 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5a444b75-4995-40f9-8432-b62814685b02" (UID: "5a444b75-4995-40f9-8432-b62814685b02"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.329565 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5a444b75-4995-40f9-8432-b62814685b02" (UID: "5a444b75-4995-40f9-8432-b62814685b02"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.334837 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-scripts" (OuterVolumeSpecName: "scripts") pod "5a444b75-4995-40f9-8432-b62814685b02" (UID: "5a444b75-4995-40f9-8432-b62814685b02"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.384272 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f8ghj" event={"ID":"5a444b75-4995-40f9-8432-b62814685b02","Type":"ContainerDied","Data":"8519f0ed3fa037009edc28e2e04a47e02ed5ca7e8b44505e7316fda57297586d"} Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.384312 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8519f0ed3fa037009edc28e2e04a47e02ed5ca7e8b44505e7316fda57297586d" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.384375 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f8ghj" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.407852 4705 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.408356 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nclnh\" (UniqueName: \"kubernetes.io/projected/5a444b75-4995-40f9-8432-b62814685b02-kube-api-access-nclnh\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.408374 4705 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.408386 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.409429 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a444b75-4995-40f9-8432-b62814685b02" (UID: "5a444b75-4995-40f9-8432-b62814685b02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.413962 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-config-data" (OuterVolumeSpecName: "config-data") pod "5a444b75-4995-40f9-8432-b62814685b02" (UID: "5a444b75-4995-40f9-8432-b62814685b02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.421405 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-866968f4bc-4s76p"] Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.431333 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55877bd6d-swpx2" event={"ID":"e650ce3a-8142-469f-bb17-116626c2141b","Type":"ContainerStarted","Data":"bd84b744305d4e0e4908497d206e34a3babc8b1572e6ee00bf7e5e893ba67b5b"} Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.454439 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b94fb6bf-sxxmr" event={"ID":"688d82f6-748b-42fa-8595-b24a65ba77d3","Type":"ContainerStarted","Data":"4ea4bbe7fa8daa74c24823206ff8d6c30fc26b0d5440a29b1d1d67364e2b2046"} Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.511008 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.511048 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5a444b75-4995-40f9-8432-b62814685b02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.625926 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="385af144-eae2-4843-804d-cd7c42a6b833" path="/var/lib/kubelet/pods/385af144-eae2-4843-804d-cd7c42a6b833/volumes" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.627080 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9ce9412-740d-4e20-8af5-58ebfc498c94" path="/var/lib/kubelet/pods/e9ce9412-740d-4e20-8af5-58ebfc498c94/volumes" Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.937558 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-nkj6g"] Jan 24 08:01:49 crc kubenswrapper[4705]: I0124 08:01:49.953366 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b898998b6-6mcbg"] Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.068999 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.131150 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d459d77f8-jpdgn"] Jan 24 08:01:50 crc kubenswrapper[4705]: W0124 08:01:50.150311 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3dfdcfb_291c_48dc_a111_6037cf854b1c.slice/crio-3ae1d3a752ae87f5ceb046433a968f0d4ac7d9cf106d84beab45dacc2aa251f7 WatchSource:0}: Error finding container 3ae1d3a752ae87f5ceb046433a968f0d4ac7d9cf106d84beab45dacc2aa251f7: Status 404 returned error can't find the container with id 3ae1d3a752ae87f5ceb046433a968f0d4ac7d9cf106d84beab45dacc2aa251f7 Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.306281 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5c7b7bd5d5-rdcz5"] Jan 24 
08:01:50 crc kubenswrapper[4705]: E0124 08:01:50.307152 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a444b75-4995-40f9-8432-b62814685b02" containerName="keystone-bootstrap" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.307167 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a444b75-4995-40f9-8432-b62814685b02" containerName="keystone-bootstrap" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.307362 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a444b75-4995-40f9-8432-b62814685b02" containerName="keystone-bootstrap" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.308086 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.314421 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.314666 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4gs7v" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.314670 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.314771 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.315123 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.315327 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.345042 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-combined-ca-bundle\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.348503 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-credential-keys\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.348654 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hncgk\" (UniqueName: \"kubernetes.io/projected/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-kube-api-access-hncgk\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.348871 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-scripts\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.350385 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-config-data\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.350459 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-fernet-keys\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.350535 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-internal-tls-certs\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.350632 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-public-tls-certs\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.363449 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c7b7bd5d5-rdcz5"] Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.457457 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-config-data\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.457532 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-fernet-keys\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.457574 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-internal-tls-certs\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.457626 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-public-tls-certs\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.457675 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-combined-ca-bundle\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.457718 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-credential-keys\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.457760 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hncgk\" (UniqueName: \"kubernetes.io/projected/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-kube-api-access-hncgk\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.457894 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-scripts\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.468078 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-config-data\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.475498 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-public-tls-certs\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.476404 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-fernet-keys\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.480575 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-credential-keys\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.492765 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-scripts\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: 
\"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.494394 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-combined-ca-bundle\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.499723 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-internal-tls-certs\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.507699 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hncgk\" (UniqueName: \"kubernetes.io/projected/d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2-kube-api-access-hncgk\") pod \"keystone-5c7b7bd5d5-rdcz5\" (UID: \"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2\") " pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.516715 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" event={"ID":"eb0f915f-127e-4dc3-8550-c75361485387","Type":"ContainerStarted","Data":"704a14648cbdc27939c72ccf1ef222c0177f56b82911053838115f05d9b0658b"} Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.523265 4705 generic.go:334] "Generic (PLEG): container finished" podID="e66c9c6e-cac9-4e99-b4d7-532f87f30ada" containerID="af1891f6e9ec6e86a3f010a149959b79147b36c8417b8953f50a9e9358d38436" exitCode=0 Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.523625 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-tdcf8" 
event={"ID":"e66c9c6e-cac9-4e99-b4d7-532f87f30ada","Type":"ContainerDied","Data":"af1891f6e9ec6e86a3f010a149959b79147b36c8417b8953f50a9e9358d38436"} Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.526529 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-866968f4bc-4s76p" event={"ID":"c492e887-1cb2-4b91-8f12-55e01657e02a","Type":"ContainerStarted","Data":"8dee13bbca499368baf62d10827b632488f2372bd734d338e229b35b6eb5b17f"} Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.536246 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b94fb6bf-sxxmr" event={"ID":"688d82f6-748b-42fa-8595-b24a65ba77d3","Type":"ContainerStarted","Data":"a3098359904e16f8784ce49d0d993a5c7f30ea1518d1ed57f78d203e2b46e56f"} Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.536314 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b94fb6bf-sxxmr" event={"ID":"688d82f6-748b-42fa-8595-b24a65ba77d3","Type":"ContainerStarted","Data":"0ba3fb74191edff069c8e1acc25b146de11147efac97ca219f87b27119c5b1f8"} Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.536711 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.542648 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2751ca08-c852-4a96-85db-d8ace6894326","Type":"ContainerStarted","Data":"6aee62d3ae4e65139b61f5a936c1792f87a624f4b31a31cdf704f257a202a256"} Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.550627 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" event={"ID":"f3dfdcfb-291c-48dc-a111-6037cf854b1c","Type":"ContainerStarted","Data":"3ae1d3a752ae87f5ceb046433a968f0d4ac7d9cf106d84beab45dacc2aa251f7"} Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.587246 4705 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-9b94fb6bf-sxxmr" podStartSLOduration=3.587217474 podStartE2EDuration="3.587217474s" podCreationTimestamp="2026-01-24 08:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:50.572123942 +0000 UTC m=+1249.291997220" watchObservedRunningTime="2026-01-24 08:01:50.587217474 +0000 UTC m=+1249.307090762" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.599568 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c44b4bd9-f2df-4ca0-8214-904cd04b10e8","Type":"ContainerStarted","Data":"b674941e119859e5e54ce45d0088310944f038d162d1e1382fc6e86307f7087b"} Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.649719 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b898998b6-6mcbg" event={"ID":"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c","Type":"ContainerStarted","Data":"63d57089549cd123dd8dc839d2fb6de34c4902814e6f210a525b134f8e4f448a"} Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.664763 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" podUID="5ee7d188-54dd-45e5-95f9-75b0bcef52d3" containerName="dnsmasq-dns" containerID="cri-o://103f816b618e0a07eea41485dccd50ef744c4bccd3257ecdde6bfc77ca197def" gracePeriod=10 Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.666974 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55877bd6d-swpx2" event={"ID":"e650ce3a-8142-469f-bb17-116626c2141b","Type":"ContainerStarted","Data":"9963cdf285d9224d239847508c0285fc4ec31124083d807e45083479922c9e7e"} Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.667094 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55877bd6d-swpx2" 
event={"ID":"e650ce3a-8142-469f-bb17-116626c2141b","Type":"ContainerStarted","Data":"3cc89029d6bf40d6a6e3b3a1be6e59c24612c68014c08e5ea040fa72abff5ef8"} Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.668862 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.669248 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.726038 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-55877bd6d-swpx2" podStartSLOduration=3.726009881 podStartE2EDuration="3.726009881s" podCreationTimestamp="2026-01-24 08:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:50.691863437 +0000 UTC m=+1249.411736725" watchObservedRunningTime="2026-01-24 08:01:50.726009881 +0000 UTC m=+1249.445883169" Jan 24 08:01:50 crc kubenswrapper[4705]: I0124 08:01:50.750266 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.218358 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6698559bb9-vn9c8"] Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.221412 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.236726 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-dbf49b754-xk8bz"] Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.239105 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.255648 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-dbf49b754-xk8bz"] Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.273869 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6698559bb9-vn9c8"] Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.320754 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e-config-data-custom\") pod \"barbican-keystone-listener-dbf49b754-xk8bz\" (UID: \"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e\") " pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.321342 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6328de33-ec5c-402a-aece-9b944c259b59-logs\") pod \"barbican-worker-6698559bb9-vn9c8\" (UID: \"6328de33-ec5c-402a-aece-9b944c259b59\") " pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.321510 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v9m7\" (UniqueName: \"kubernetes.io/projected/6328de33-ec5c-402a-aece-9b944c259b59-kube-api-access-6v9m7\") pod \"barbican-worker-6698559bb9-vn9c8\" (UID: \"6328de33-ec5c-402a-aece-9b944c259b59\") " pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.321754 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e-combined-ca-bundle\") pod 
\"barbican-keystone-listener-dbf49b754-xk8bz\" (UID: \"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e\") " pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.322085 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6328de33-ec5c-402a-aece-9b944c259b59-config-data-custom\") pod \"barbican-worker-6698559bb9-vn9c8\" (UID: \"6328de33-ec5c-402a-aece-9b944c259b59\") " pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.322297 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e-logs\") pod \"barbican-keystone-listener-dbf49b754-xk8bz\" (UID: \"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e\") " pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.322659 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e-config-data\") pod \"barbican-keystone-listener-dbf49b754-xk8bz\" (UID: \"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e\") " pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.322958 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6328de33-ec5c-402a-aece-9b944c259b59-combined-ca-bundle\") pod \"barbican-worker-6698559bb9-vn9c8\" (UID: \"6328de33-ec5c-402a-aece-9b944c259b59\") " pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.323057 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-ptwl8\" (UniqueName: \"kubernetes.io/projected/85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e-kube-api-access-ptwl8\") pod \"barbican-keystone-listener-dbf49b754-xk8bz\" (UID: \"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e\") " pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.323160 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6328de33-ec5c-402a-aece-9b944c259b59-config-data\") pod \"barbican-worker-6698559bb9-vn9c8\" (UID: \"6328de33-ec5c-402a-aece-9b944c259b59\") " pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.397214 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-68d4559598-l78mv"] Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.399791 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.417548 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-68d4559598-l78mv"] Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.425879 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6328de33-ec5c-402a-aece-9b944c259b59-combined-ca-bundle\") pod \"barbican-worker-6698559bb9-vn9c8\" (UID: \"6328de33-ec5c-402a-aece-9b944c259b59\") " pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.426002 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptwl8\" (UniqueName: \"kubernetes.io/projected/85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e-kube-api-access-ptwl8\") pod \"barbican-keystone-listener-dbf49b754-xk8bz\" (UID: \"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e\") " 
pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.426065 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6328de33-ec5c-402a-aece-9b944c259b59-config-data\") pod \"barbican-worker-6698559bb9-vn9c8\" (UID: \"6328de33-ec5c-402a-aece-9b944c259b59\") " pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.426142 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e-config-data-custom\") pod \"barbican-keystone-listener-dbf49b754-xk8bz\" (UID: \"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e\") " pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.426175 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6328de33-ec5c-402a-aece-9b944c259b59-logs\") pod \"barbican-worker-6698559bb9-vn9c8\" (UID: \"6328de33-ec5c-402a-aece-9b944c259b59\") " pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.426198 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v9m7\" (UniqueName: \"kubernetes.io/projected/6328de33-ec5c-402a-aece-9b944c259b59-kube-api-access-6v9m7\") pod \"barbican-worker-6698559bb9-vn9c8\" (UID: \"6328de33-ec5c-402a-aece-9b944c259b59\") " pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.426193 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c7b7bd5d5-rdcz5"] Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.426242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e-combined-ca-bundle\") pod \"barbican-keystone-listener-dbf49b754-xk8bz\" (UID: \"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e\") " pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.426273 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6328de33-ec5c-402a-aece-9b944c259b59-config-data-custom\") pod \"barbican-worker-6698559bb9-vn9c8\" (UID: \"6328de33-ec5c-402a-aece-9b944c259b59\") " pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.426301 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e-logs\") pod \"barbican-keystone-listener-dbf49b754-xk8bz\" (UID: \"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e\") " pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.426360 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e-config-data\") pod \"barbican-keystone-listener-dbf49b754-xk8bz\" (UID: \"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e\") " pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.427706 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6328de33-ec5c-402a-aece-9b944c259b59-logs\") pod \"barbican-worker-6698559bb9-vn9c8\" (UID: \"6328de33-ec5c-402a-aece-9b944c259b59\") " pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.428172 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e-logs\") pod \"barbican-keystone-listener-dbf49b754-xk8bz\" (UID: \"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e\") " pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.442521 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e-config-data-custom\") pod \"barbican-keystone-listener-dbf49b754-xk8bz\" (UID: \"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e\") " pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.443437 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6328de33-ec5c-402a-aece-9b944c259b59-config-data\") pod \"barbican-worker-6698559bb9-vn9c8\" (UID: \"6328de33-ec5c-402a-aece-9b944c259b59\") " pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.458449 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6328de33-ec5c-402a-aece-9b944c259b59-config-data-custom\") pod \"barbican-worker-6698559bb9-vn9c8\" (UID: \"6328de33-ec5c-402a-aece-9b944c259b59\") " pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.467082 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptwl8\" (UniqueName: \"kubernetes.io/projected/85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e-kube-api-access-ptwl8\") pod \"barbican-keystone-listener-dbf49b754-xk8bz\" (UID: \"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e\") " pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: W0124 08:01:51.469471 4705 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5f2747d_33dc_4eb9_89d2_2c6f2907d4e2.slice/crio-2139370eb63f3d32e5bcf41db1c2c1c118d26c393a8d73f8925a937fcbfb9607 WatchSource:0}: Error finding container 2139370eb63f3d32e5bcf41db1c2c1c118d26c393a8d73f8925a937fcbfb9607: Status 404 returned error can't find the container with id 2139370eb63f3d32e5bcf41db1c2c1c118d26c393a8d73f8925a937fcbfb9607 Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.469912 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e-combined-ca-bundle\") pod \"barbican-keystone-listener-dbf49b754-xk8bz\" (UID: \"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e\") " pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.470931 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6328de33-ec5c-402a-aece-9b944c259b59-combined-ca-bundle\") pod \"barbican-worker-6698559bb9-vn9c8\" (UID: \"6328de33-ec5c-402a-aece-9b944c259b59\") " pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.472404 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e-config-data\") pod \"barbican-keystone-listener-dbf49b754-xk8bz\" (UID: \"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e\") " pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.481383 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v9m7\" (UniqueName: \"kubernetes.io/projected/6328de33-ec5c-402a-aece-9b944c259b59-kube-api-access-6v9m7\") pod \"barbican-worker-6698559bb9-vn9c8\" (UID: \"6328de33-ec5c-402a-aece-9b944c259b59\") " 
pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.529004 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-combined-ca-bundle\") pod \"barbican-api-68d4559598-l78mv\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.529235 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-logs\") pod \"barbican-api-68d4559598-l78mv\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.529403 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68vql\" (UniqueName: \"kubernetes.io/projected/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-kube-api-access-68vql\") pod \"barbican-api-68d4559598-l78mv\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.529722 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-config-data-custom\") pod \"barbican-api-68d4559598-l78mv\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.529946 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-config-data\") pod \"barbican-api-68d4559598-l78mv\" 
(UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.603481 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6698559bb9-vn9c8" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.632550 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-config-data\") pod \"barbican-api-68d4559598-l78mv\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.632726 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-combined-ca-bundle\") pod \"barbican-api-68d4559598-l78mv\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.632917 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-logs\") pod \"barbican-api-68d4559598-l78mv\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.632960 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68vql\" (UniqueName: \"kubernetes.io/projected/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-kube-api-access-68vql\") pod \"barbican-api-68d4559598-l78mv\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.633063 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-config-data-custom\") pod \"barbican-api-68d4559598-l78mv\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.634022 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-logs\") pod \"barbican-api-68d4559598-l78mv\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.638200 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-config-data\") pod \"barbican-api-68d4559598-l78mv\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.639246 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-config-data-custom\") pod \"barbican-api-68d4559598-l78mv\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.640334 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-combined-ca-bundle\") pod \"barbican-api-68d4559598-l78mv\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.644405 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.661566 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68vql\" (UniqueName: \"kubernetes.io/projected/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-kube-api-access-68vql\") pod \"barbican-api-68d4559598-l78mv\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.700144 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c7b7bd5d5-rdcz5" event={"ID":"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2","Type":"ContainerStarted","Data":"2139370eb63f3d32e5bcf41db1c2c1c118d26c393a8d73f8925a937fcbfb9607"} Jan 24 08:01:51 crc kubenswrapper[4705]: I0124 08:01:51.935121 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.505444 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b898998b6-6mcbg"] Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.654680 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-55f654f7bb-65t7w"] Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.658111 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.664604 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.665035 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.794475 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c44b4bd9-f2df-4ca0-8214-904cd04b10e8","Type":"ContainerStarted","Data":"a0a4d36097fa0829035badbb6b222043f52ffb9ea2ba7bc665f21f77dfe06563"} Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.828966 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55f654f7bb-65t7w"] Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.831038 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cefd3d6-3762-41d6-adc7-31134fde2bb7-config-data\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.831228 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cefd3d6-3762-41d6-adc7-31134fde2bb7-combined-ca-bundle\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.831411 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cefd3d6-3762-41d6-adc7-31134fde2bb7-internal-tls-certs\") pod 
\"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.831457 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gg9j\" (UniqueName: \"kubernetes.io/projected/9cefd3d6-3762-41d6-adc7-31134fde2bb7-kube-api-access-2gg9j\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.831515 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cefd3d6-3762-41d6-adc7-31134fde2bb7-public-tls-certs\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.831707 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9cefd3d6-3762-41d6-adc7-31134fde2bb7-config-data-custom\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.832062 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cefd3d6-3762-41d6-adc7-31134fde2bb7-logs\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.884251 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b898998b6-6mcbg" 
event={"ID":"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c","Type":"ContainerStarted","Data":"84c92ee74874ce67e881123994ac0468aaeeca92bf89b8294cfc5fb9ce961c28"} Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.926941 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2751ca08-c852-4a96-85db-d8ace6894326","Type":"ContainerStarted","Data":"a4640449752b5c26ec4c660f738302119836ab01f915a786be81139c2b3de9c3"} Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.940268 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9cefd3d6-3762-41d6-adc7-31134fde2bb7-config-data-custom\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.940425 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cefd3d6-3762-41d6-adc7-31134fde2bb7-logs\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.940500 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cefd3d6-3762-41d6-adc7-31134fde2bb7-config-data\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.940589 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cefd3d6-3762-41d6-adc7-31134fde2bb7-combined-ca-bundle\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " 
pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.940620 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cefd3d6-3762-41d6-adc7-31134fde2bb7-internal-tls-certs\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.940641 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gg9j\" (UniqueName: \"kubernetes.io/projected/9cefd3d6-3762-41d6-adc7-31134fde2bb7-kube-api-access-2gg9j\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.940667 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cefd3d6-3762-41d6-adc7-31134fde2bb7-public-tls-certs\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.946089 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cefd3d6-3762-41d6-adc7-31134fde2bb7-logs\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.952212 4705 generic.go:334] "Generic (PLEG): container finished" podID="eb0f915f-127e-4dc3-8550-c75361485387" containerID="5c534a84563afb2f34607f22b91fbd671924bb9f51967452c5d733fae08a3c68" exitCode=0 Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.952387 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" event={"ID":"eb0f915f-127e-4dc3-8550-c75361485387","Type":"ContainerDied","Data":"5c534a84563afb2f34607f22b91fbd671924bb9f51967452c5d733fae08a3c68"} Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.956924 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9cefd3d6-3762-41d6-adc7-31134fde2bb7-config-data-custom\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.962521 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cefd3d6-3762-41d6-adc7-31134fde2bb7-public-tls-certs\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.963525 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cefd3d6-3762-41d6-adc7-31134fde2bb7-internal-tls-certs\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.979070 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cefd3d6-3762-41d6-adc7-31134fde2bb7-combined-ca-bundle\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.987568 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cefd3d6-3762-41d6-adc7-31134fde2bb7-config-data\") pod 
\"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.991558 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-tdcf8" event={"ID":"e66c9c6e-cac9-4e99-b4d7-532f87f30ada","Type":"ContainerDied","Data":"fe1f58e4b2c7752839a891627839854b848eaa81ff67813462ee13f7f4287db9"} Jan 24 08:01:52 crc kubenswrapper[4705]: I0124 08:01:52.991635 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe1f58e4b2c7752839a891627839854b848eaa81ff67813462ee13f7f4287db9" Jan 24 08:01:53 crc kubenswrapper[4705]: I0124 08:01:53.014304 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gg9j\" (UniqueName: \"kubernetes.io/projected/9cefd3d6-3762-41d6-adc7-31134fde2bb7-kube-api-access-2gg9j\") pod \"barbican-api-55f654f7bb-65t7w\" (UID: \"9cefd3d6-3762-41d6-adc7-31134fde2bb7\") " pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:53 crc kubenswrapper[4705]: I0124 08:01:53.063246 4705 generic.go:334] "Generic (PLEG): container finished" podID="5ee7d188-54dd-45e5-95f9-75b0bcef52d3" containerID="103f816b618e0a07eea41485dccd50ef744c4bccd3257ecdde6bfc77ca197def" exitCode=0 Jan 24 08:01:53 crc kubenswrapper[4705]: I0124 08:01:53.063727 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" event={"ID":"5ee7d188-54dd-45e5-95f9-75b0bcef52d3","Type":"ContainerDied","Data":"103f816b618e0a07eea41485dccd50ef744c4bccd3257ecdde6bfc77ca197def"} Jan 24 08:01:53 crc kubenswrapper[4705]: I0124 08:01:53.066723 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.06669395 podStartE2EDuration="7.06669395s" podCreationTimestamp="2026-01-24 08:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:52.916530875 +0000 UTC m=+1251.636404163" watchObservedRunningTime="2026-01-24 08:01:53.06669395 +0000 UTC m=+1251.786567238" Jan 24 08:01:53 crc kubenswrapper[4705]: I0124 08:01:53.099004 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-tdcf8" Jan 24 08:01:53 crc kubenswrapper[4705]: E0124 08:01:53.102504 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb0f915f_127e_4dc3_8550_c75361485387.slice/crio-5c534a84563afb2f34607f22b91fbd671924bb9f51967452c5d733fae08a3c68.scope\": RecentStats: unable to find data in memory cache]" Jan 24 08:01:53 crc kubenswrapper[4705]: I0124 08:01:53.166030 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-config-data\") pod \"e66c9c6e-cac9-4e99-b4d7-532f87f30ada\" (UID: \"e66c9c6e-cac9-4e99-b4d7-532f87f30ada\") " Jan 24 08:01:53 crc kubenswrapper[4705]: I0124 08:01:53.166112 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpddl\" (UniqueName: \"kubernetes.io/projected/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-kube-api-access-bpddl\") pod \"e66c9c6e-cac9-4e99-b4d7-532f87f30ada\" (UID: \"e66c9c6e-cac9-4e99-b4d7-532f87f30ada\") " Jan 24 08:01:53 crc kubenswrapper[4705]: I0124 08:01:53.166408 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-combined-ca-bundle\") pod \"e66c9c6e-cac9-4e99-b4d7-532f87f30ada\" (UID: \"e66c9c6e-cac9-4e99-b4d7-532f87f30ada\") " Jan 24 08:01:53 crc kubenswrapper[4705]: I0124 08:01:53.171299 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:53 crc kubenswrapper[4705]: I0124 08:01:53.202328 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-kube-api-access-bpddl" (OuterVolumeSpecName: "kube-api-access-bpddl") pod "e66c9c6e-cac9-4e99-b4d7-532f87f30ada" (UID: "e66c9c6e-cac9-4e99-b4d7-532f87f30ada"). InnerVolumeSpecName "kube-api-access-bpddl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:53 crc kubenswrapper[4705]: I0124 08:01:53.271268 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpddl\" (UniqueName: \"kubernetes.io/projected/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-kube-api-access-bpddl\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:53 crc kubenswrapper[4705]: I0124 08:01:53.359550 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e66c9c6e-cac9-4e99-b4d7-532f87f30ada" (UID: "e66c9c6e-cac9-4e99-b4d7-532f87f30ada"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:53 crc kubenswrapper[4705]: I0124 08:01:53.378593 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:53 crc kubenswrapper[4705]: I0124 08:01:53.474838 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-config-data" (OuterVolumeSpecName: "config-data") pod "e66c9c6e-cac9-4e99-b4d7-532f87f30ada" (UID: "e66c9c6e-cac9-4e99-b4d7-532f87f30ada"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:53 crc kubenswrapper[4705]: I0124 08:01:53.481954 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66c9c6e-cac9-4e99-b4d7-532f87f30ada-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:53 crc kubenswrapper[4705]: I0124 08:01:53.752369 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-dbf49b754-xk8bz"] Jan 24 08:01:54 crc kubenswrapper[4705]: I0124 08:01:54.107601 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-tdcf8" Jan 24 08:01:54 crc kubenswrapper[4705]: I0124 08:01:54.107583 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c7b7bd5d5-rdcz5" event={"ID":"d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2","Type":"ContainerStarted","Data":"0f1e3527022bc1e98ccb397e09f76e8c6a480209c1d18c624db9ce89ac3a8ff7"} Jan 24 08:01:54 crc kubenswrapper[4705]: I0124 08:01:54.145725 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5c7b7bd5d5-rdcz5" podStartSLOduration=4.145693163 podStartE2EDuration="4.145693163s" podCreationTimestamp="2026-01-24 08:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:54.140281332 +0000 UTC m=+1252.860154620" watchObservedRunningTime="2026-01-24 08:01:54.145693163 +0000 UTC m=+1252.865566451" Jan 24 08:01:55 crc kubenswrapper[4705]: I0124 08:01:55.115860 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:01:55 crc kubenswrapper[4705]: I0124 08:01:55.940972 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.069880 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-ovsdbserver-sb\") pod \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.069963 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-dns-swift-storage-0\") pod \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.070060 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-ovsdbserver-nb\") pod \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.070369 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-dns-svc\") pod \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.070399 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-config\") pod \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.070642 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r8nd\" 
(UniqueName: \"kubernetes.io/projected/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-kube-api-access-5r8nd\") pod \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\" (UID: \"5ee7d188-54dd-45e5-95f9-75b0bcef52d3\") " Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.092949 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-kube-api-access-5r8nd" (OuterVolumeSpecName: "kube-api-access-5r8nd") pod "5ee7d188-54dd-45e5-95f9-75b0bcef52d3" (UID: "5ee7d188-54dd-45e5-95f9-75b0bcef52d3"). InnerVolumeSpecName "kube-api-access-5r8nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.128898 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5ee7d188-54dd-45e5-95f9-75b0bcef52d3" (UID: "5ee7d188-54dd-45e5-95f9-75b0bcef52d3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.136055 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" event={"ID":"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e","Type":"ContainerStarted","Data":"b4c1bc436d3b6f9cfa01d466d5f25808dab4cb2fea75c11b0165885c549873cb"} Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.137431 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ee7d188-54dd-45e5-95f9-75b0bcef52d3" (UID: "5ee7d188-54dd-45e5-95f9-75b0bcef52d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.146696 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.147105 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" event={"ID":"5ee7d188-54dd-45e5-95f9-75b0bcef52d3","Type":"ContainerDied","Data":"423cfb3fa77200ebe2b1ddfe6947fe6215d0d6aef96716447dce0974bd844638"} Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.147475 4705 scope.go:117] "RemoveContainer" containerID="103f816b618e0a07eea41485dccd50ef744c4bccd3257ecdde6bfc77ca197def" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.147174 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5ee7d188-54dd-45e5-95f9-75b0bcef52d3" (UID: "5ee7d188-54dd-45e5-95f9-75b0bcef52d3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.147244 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5ee7d188-54dd-45e5-95f9-75b0bcef52d3" (UID: "5ee7d188-54dd-45e5-95f9-75b0bcef52d3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.171305 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-config" (OuterVolumeSpecName: "config") pod "5ee7d188-54dd-45e5-95f9-75b0bcef52d3" (UID: "5ee7d188-54dd-45e5-95f9-75b0bcef52d3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.173957 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.174002 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.174014 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.174026 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.174036 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.174045 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r8nd\" (UniqueName: \"kubernetes.io/projected/5ee7d188-54dd-45e5-95f9-75b0bcef52d3-kube-api-access-5r8nd\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.399918 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6698559bb9-vn9c8"] Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.507933 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-lcm9r"] Jan 24 
08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.518305 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-lcm9r"] Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.870015 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.870202 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.928328 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 24 08:01:56 crc kubenswrapper[4705]: I0124 08:01:56.935214 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 24 08:01:57 crc kubenswrapper[4705]: I0124 08:01:57.105843 4705 scope.go:117] "RemoveContainer" containerID="810d4d6088edda10ba93ccd3cc719cbe45a69480a7a447a5b2df5305da61fbc5" Jan 24 08:01:57 crc kubenswrapper[4705]: I0124 08:01:57.243346 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6698559bb9-vn9c8" event={"ID":"6328de33-ec5c-402a-aece-9b944c259b59","Type":"ContainerStarted","Data":"13c8a3cadd0a3ea03dc1323ada4dbd5219cabc5c2637a3b70e135d0701009bce"} Jan 24 08:01:57 crc kubenswrapper[4705]: I0124 08:01:57.243460 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 08:01:57 crc kubenswrapper[4705]: I0124 08:01:57.243661 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 08:01:57 crc kubenswrapper[4705]: I0124 08:01:57.550731 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-68d4559598-l78mv"] Jan 24 08:01:57 crc kubenswrapper[4705]: I0124 08:01:57.614792 4705 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ee7d188-54dd-45e5-95f9-75b0bcef52d3" path="/var/lib/kubelet/pods/5ee7d188-54dd-45e5-95f9-75b0bcef52d3/volumes" Jan 24 08:01:58 crc kubenswrapper[4705]: I0124 08:01:58.068081 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55f654f7bb-65t7w"] Jan 24 08:01:58 crc kubenswrapper[4705]: W0124 08:01:58.116862 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cefd3d6_3762_41d6_adc7_31134fde2bb7.slice/crio-3b87d246ce3e2fe7f5ced318397e6ef1d938f92aa357cab9452a37fb37e36fca WatchSource:0}: Error finding container 3b87d246ce3e2fe7f5ced318397e6ef1d938f92aa357cab9452a37fb37e36fca: Status 404 returned error can't find the container with id 3b87d246ce3e2fe7f5ced318397e6ef1d938f92aa357cab9452a37fb37e36fca Jan 24 08:01:58 crc kubenswrapper[4705]: I0124 08:01:58.269622 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-866968f4bc-4s76p" event={"ID":"c492e887-1cb2-4b91-8f12-55e01657e02a","Type":"ContainerStarted","Data":"85fb2113b63a40d45d189c03d5aa6b3c1d07f92cf4ffc596e9bb2300b02fba99"} Jan 24 08:01:58 crc kubenswrapper[4705]: I0124 08:01:58.277388 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68d4559598-l78mv" event={"ID":"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b","Type":"ContainerStarted","Data":"69c5e8d5ea3aeeecfdf3393719935a80c81fd76bc1281982e7ee87c02110434b"} Jan 24 08:01:58 crc kubenswrapper[4705]: I0124 08:01:58.313505 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b898998b6-6mcbg" event={"ID":"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c","Type":"ContainerStarted","Data":"c75b1a3b44af21d27f00f758bbb9d217b7294990931cfe2d18afdeebb36a846e"} Jan 24 08:01:58 crc kubenswrapper[4705]: I0124 08:01:58.313684 4705 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-api-b898998b6-6mcbg" podUID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" containerName="barbican-api-log" containerID="cri-o://84c92ee74874ce67e881123994ac0468aaeeca92bf89b8294cfc5fb9ce961c28" gracePeriod=30 Jan 24 08:01:58 crc kubenswrapper[4705]: I0124 08:01:58.314488 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:58 crc kubenswrapper[4705]: I0124 08:01:58.314517 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:01:58 crc kubenswrapper[4705]: I0124 08:01:58.314596 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b898998b6-6mcbg" podUID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" containerName="barbican-api" containerID="cri-o://c75b1a3b44af21d27f00f758bbb9d217b7294990931cfe2d18afdeebb36a846e" gracePeriod=30 Jan 24 08:01:58 crc kubenswrapper[4705]: I0124 08:01:58.324409 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" event={"ID":"eb0f915f-127e-4dc3-8550-c75361485387","Type":"ContainerStarted","Data":"7c3e84abfad6929b35d2ca75ae93c70e05615a30d6e9d7c234da8fcc4d342e6e"} Jan 24 08:01:58 crc kubenswrapper[4705]: I0124 08:01:58.324487 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:01:58 crc kubenswrapper[4705]: I0124 08:01:58.342409 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-b898998b6-6mcbg" podStartSLOduration=10.342386161 podStartE2EDuration="10.342386161s" podCreationTimestamp="2026-01-24 08:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:58.342167925 +0000 UTC m=+1257.062041223" watchObservedRunningTime="2026-01-24 08:01:58.342386161 +0000 UTC 
m=+1257.062259449" Jan 24 08:01:58 crc kubenswrapper[4705]: I0124 08:01:58.385167 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" podStartSLOduration=10.385143996 podStartE2EDuration="10.385143996s" podCreationTimestamp="2026-01-24 08:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:58.375727993 +0000 UTC m=+1257.095601281" watchObservedRunningTime="2026-01-24 08:01:58.385143996 +0000 UTC m=+1257.105017284" Jan 24 08:01:58 crc kubenswrapper[4705]: I0124 08:01:58.385615 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b898998b6-6mcbg" podUID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": EOF" Jan 24 08:01:58 crc kubenswrapper[4705]: I0124 08:01:58.385940 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6698559bb9-vn9c8" event={"ID":"6328de33-ec5c-402a-aece-9b944c259b59","Type":"ContainerStarted","Data":"5a7eabf0a84ddce6cdded5cce839857c8c597890ed8a7b04b07b0d86424db027"} Jan 24 08:01:58 crc kubenswrapper[4705]: I0124 08:01:58.428130 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f654f7bb-65t7w" event={"ID":"9cefd3d6-3762-41d6-adc7-31134fde2bb7","Type":"ContainerStarted","Data":"3b87d246ce3e2fe7f5ced318397e6ef1d938f92aa357cab9452a37fb37e36fca"} Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.449977 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2751ca08-c852-4a96-85db-d8ace6894326","Type":"ContainerStarted","Data":"c97abda2052572a883392013120f7bd6fafd518ffbfe655f3059288a69738549"} Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.456880 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-6698559bb9-vn9c8" event={"ID":"6328de33-ec5c-402a-aece-9b944c259b59","Type":"ContainerStarted","Data":"a9b60b1c58a9c3609a51abecf2be4b3fde66d01488f6e7c64a2fd5635ebae529"} Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.461484 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" event={"ID":"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e","Type":"ContainerStarted","Data":"b9947a5ada9028cdcb12bf546ccb871a06aae07f9a36b137fa78dbc1778f176f"} Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.461543 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" event={"ID":"85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e","Type":"ContainerStarted","Data":"f00db84a269db417443ade0c519d7e160e77f8de836265edbdebca2cbb0564bc"} Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.464589 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" event={"ID":"f3dfdcfb-291c-48dc-a111-6037cf854b1c","Type":"ContainerStarted","Data":"c18857cee3cdac665471f47c44dbd65d3fcba5f6578628cfa407422f76d5d072"} Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.464623 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" event={"ID":"f3dfdcfb-291c-48dc-a111-6037cf854b1c","Type":"ContainerStarted","Data":"6edb5fe6d24491ee37021a4455ba1e528f8d12d4d5238b70466a7f848ca210fe"} Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.468462 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f654f7bb-65t7w" event={"ID":"9cefd3d6-3762-41d6-adc7-31134fde2bb7","Type":"ContainerStarted","Data":"e458113d40b7952433b84b4857a8ea1348fc0f31b438e74d83ec066870a08b91"} Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.468505 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f654f7bb-65t7w" 
event={"ID":"9cefd3d6-3762-41d6-adc7-31134fde2bb7","Type":"ContainerStarted","Data":"c500b38c35cdbb40224f04cb6afed5a450e6a10425b9be51cd783cb7e576f581"} Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.468677 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.470770 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"787ad3bd-2593-42a7-b368-70abddcd74da","Type":"ContainerStarted","Data":"3102b2fef3f57292cb5b6b9258bc414e29b75b66237f8a94e7a97fd6f4617f74"} Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.477422 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-866968f4bc-4s76p" event={"ID":"c492e887-1cb2-4b91-8f12-55e01657e02a","Type":"ContainerStarted","Data":"0809ebe6d700dd6b24e257402c90ef30f09cd9268c453a05d5dab36b68a8d7a1"} Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.485640 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68d4559598-l78mv" event={"ID":"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b","Type":"ContainerStarted","Data":"7d4a2c00e876484e1333cbc464054980417037f34b1178158b58c0641b887b22"} Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.485711 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68d4559598-l78mv" event={"ID":"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b","Type":"ContainerStarted","Data":"12bccab06aca98ad37c2d5878cc9f7ef18659a62457d4b892c8c832dd95caefb"} Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.487000 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.487153 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.495493 
4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lvscj" event={"ID":"fd93ff70-0f51-4af8-9a10-6407f4901667","Type":"ContainerStarted","Data":"e80d6cd79edee3a05bdaf723c476e8d24e9a85f1bd291743e5c0e0e55c7221e3"} Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.513062 4705 generic.go:334] "Generic (PLEG): container finished" podID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" containerID="84c92ee74874ce67e881123994ac0468aaeeca92bf89b8294cfc5fb9ce961c28" exitCode=143 Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.513120 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b898998b6-6mcbg" event={"ID":"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c","Type":"ContainerDied","Data":"84c92ee74874ce67e881123994ac0468aaeeca92bf89b8294cfc5fb9ce961c28"} Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.514230 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=11.514206746 podStartE2EDuration="11.514206746s" podCreationTimestamp="2026-01-24 08:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:59.473691345 +0000 UTC m=+1258.193564653" watchObservedRunningTime="2026-01-24 08:01:59.514206746 +0000 UTC m=+1258.234080034" Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.531163 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6698559bb9-vn9c8" podStartSLOduration=8.531123069 podStartE2EDuration="8.531123069s" podCreationTimestamp="2026-01-24 08:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:59.50432066 +0000 UTC m=+1258.224193938" watchObservedRunningTime="2026-01-24 08:01:59.531123069 +0000 UTC m=+1258.250996357" Jan 24 08:01:59 crc 
kubenswrapper[4705]: I0124 08:01:59.599465 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" podStartSLOduration=4.439441337 podStartE2EDuration="11.599436288s" podCreationTimestamp="2026-01-24 08:01:48 +0000 UTC" firstStartedPulling="2026-01-24 08:01:50.167438957 +0000 UTC m=+1248.887312245" lastFinishedPulling="2026-01-24 08:01:57.327433908 +0000 UTC m=+1256.047307196" observedRunningTime="2026-01-24 08:01:59.532510358 +0000 UTC m=+1258.252383646" watchObservedRunningTime="2026-01-24 08:01:59.599436288 +0000 UTC m=+1258.319309576" Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.619965 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7b667979-lcm9r" podUID="5ee7d188-54dd-45e5-95f9-75b0bcef52d3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: i/o timeout" Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.651453 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-866968f4bc-4s76p"] Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.665100 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-dbf49b754-xk8bz" podStartSLOduration=6.6718174569999995 podStartE2EDuration="8.665072811s" podCreationTimestamp="2026-01-24 08:01:51 +0000 UTC" firstStartedPulling="2026-01-24 08:01:55.840861068 +0000 UTC m=+1254.560734356" lastFinishedPulling="2026-01-24 08:01:57.834116412 +0000 UTC m=+1256.553989710" observedRunningTime="2026-01-24 08:01:59.573624286 +0000 UTC m=+1258.293497604" watchObservedRunningTime="2026-01-24 08:01:59.665072811 +0000 UTC m=+1258.384946109" Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.707521 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-55f654f7bb-65t7w" podStartSLOduration=7.7075031769999995 
podStartE2EDuration="7.707503177s" podCreationTimestamp="2026-01-24 08:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:59.597703359 +0000 UTC m=+1258.317576657" watchObservedRunningTime="2026-01-24 08:01:59.707503177 +0000 UTC m=+1258.427376465" Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.724998 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5d459d77f8-jpdgn"] Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.731801 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-lvscj" podStartSLOduration=4.890889323 podStartE2EDuration="58.731773795s" podCreationTimestamp="2026-01-24 08:01:01 +0000 UTC" firstStartedPulling="2026-01-24 08:01:03.502303556 +0000 UTC m=+1202.222176844" lastFinishedPulling="2026-01-24 08:01:57.343188028 +0000 UTC m=+1256.063061316" observedRunningTime="2026-01-24 08:01:59.651305626 +0000 UTC m=+1258.371178924" watchObservedRunningTime="2026-01-24 08:01:59.731773795 +0000 UTC m=+1258.451647083" Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.740960 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-866968f4bc-4s76p" podStartSLOduration=4.07514818 podStartE2EDuration="11.740931s" podCreationTimestamp="2026-01-24 08:01:48 +0000 UTC" firstStartedPulling="2026-01-24 08:01:49.443787291 +0000 UTC m=+1248.163660579" lastFinishedPulling="2026-01-24 08:01:57.109570111 +0000 UTC m=+1255.829443399" observedRunningTime="2026-01-24 08:01:59.689571066 +0000 UTC m=+1258.409444364" watchObservedRunningTime="2026-01-24 08:01:59.740931 +0000 UTC m=+1258.460804288" Jan 24 08:01:59 crc kubenswrapper[4705]: I0124 08:01:59.797639 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-68d4559598-l78mv" podStartSLOduration=8.797594853 
podStartE2EDuration="8.797594853s" podCreationTimestamp="2026-01-24 08:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:59.720432368 +0000 UTC m=+1258.440305656" watchObservedRunningTime="2026-01-24 08:01:59.797594853 +0000 UTC m=+1258.517468161" Jan 24 08:02:00 crc kubenswrapper[4705]: I0124 08:02:00.530208 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:02:00 crc kubenswrapper[4705]: I0124 08:02:00.833914 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 24 08:02:00 crc kubenswrapper[4705]: I0124 08:02:00.834268 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 08:02:01 crc kubenswrapper[4705]: I0124 08:02:01.210043 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 24 08:02:01 crc kubenswrapper[4705]: I0124 08:02:01.538131 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" podUID="f3dfdcfb-291c-48dc-a111-6037cf854b1c" containerName="barbican-keystone-listener-log" containerID="cri-o://c18857cee3cdac665471f47c44dbd65d3fcba5f6578628cfa407422f76d5d072" gracePeriod=30 Jan 24 08:02:01 crc kubenswrapper[4705]: I0124 08:02:01.538526 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" podUID="f3dfdcfb-291c-48dc-a111-6037cf854b1c" containerName="barbican-keystone-listener" containerID="cri-o://6edb5fe6d24491ee37021a4455ba1e528f8d12d4d5238b70466a7f848ca210fe" gracePeriod=30 Jan 24 08:02:01 crc kubenswrapper[4705]: I0124 08:02:01.539359 4705 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-worker-866968f4bc-4s76p" podUID="c492e887-1cb2-4b91-8f12-55e01657e02a" containerName="barbican-worker-log" containerID="cri-o://85fb2113b63a40d45d189c03d5aa6b3c1d07f92cf4ffc596e9bb2300b02fba99" gracePeriod=30 Jan 24 08:02:01 crc kubenswrapper[4705]: I0124 08:02:01.539507 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-866968f4bc-4s76p" podUID="c492e887-1cb2-4b91-8f12-55e01657e02a" containerName="barbican-worker" containerID="cri-o://0809ebe6d700dd6b24e257402c90ef30f09cd9268c453a05d5dab36b68a8d7a1" gracePeriod=30 Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.556295 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" event={"ID":"f3dfdcfb-291c-48dc-a111-6037cf854b1c","Type":"ContainerDied","Data":"c18857cee3cdac665471f47c44dbd65d3fcba5f6578628cfa407422f76d5d072"} Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.556197 4705 generic.go:334] "Generic (PLEG): container finished" podID="f3dfdcfb-291c-48dc-a111-6037cf854b1c" containerID="c18857cee3cdac665471f47c44dbd65d3fcba5f6578628cfa407422f76d5d072" exitCode=143 Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.561056 4705 generic.go:334] "Generic (PLEG): container finished" podID="c492e887-1cb2-4b91-8f12-55e01657e02a" containerID="0809ebe6d700dd6b24e257402c90ef30f09cd9268c453a05d5dab36b68a8d7a1" exitCode=0 Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.561089 4705 generic.go:334] "Generic (PLEG): container finished" podID="c492e887-1cb2-4b91-8f12-55e01657e02a" containerID="85fb2113b63a40d45d189c03d5aa6b3c1d07f92cf4ffc596e9bb2300b02fba99" exitCode=143 Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.561113 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-866968f4bc-4s76p" 
event={"ID":"c492e887-1cb2-4b91-8f12-55e01657e02a","Type":"ContainerDied","Data":"0809ebe6d700dd6b24e257402c90ef30f09cd9268c453a05d5dab36b68a8d7a1"} Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.561141 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-866968f4bc-4s76p" event={"ID":"c492e887-1cb2-4b91-8f12-55e01657e02a","Type":"ContainerDied","Data":"85fb2113b63a40d45d189c03d5aa6b3c1d07f92cf4ffc596e9bb2300b02fba99"} Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.809415 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.909401 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-combined-ca-bundle\") pod \"c492e887-1cb2-4b91-8f12-55e01657e02a\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.909519 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c492e887-1cb2-4b91-8f12-55e01657e02a-logs\") pod \"c492e887-1cb2-4b91-8f12-55e01657e02a\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.909651 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-config-data\") pod \"c492e887-1cb2-4b91-8f12-55e01657e02a\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.909703 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72xr2\" (UniqueName: \"kubernetes.io/projected/c492e887-1cb2-4b91-8f12-55e01657e02a-kube-api-access-72xr2\") pod 
\"c492e887-1cb2-4b91-8f12-55e01657e02a\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.909739 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-config-data-custom\") pod \"c492e887-1cb2-4b91-8f12-55e01657e02a\" (UID: \"c492e887-1cb2-4b91-8f12-55e01657e02a\") " Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.911961 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c492e887-1cb2-4b91-8f12-55e01657e02a-logs" (OuterVolumeSpecName: "logs") pod "c492e887-1cb2-4b91-8f12-55e01657e02a" (UID: "c492e887-1cb2-4b91-8f12-55e01657e02a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.918043 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c492e887-1cb2-4b91-8f12-55e01657e02a" (UID: "c492e887-1cb2-4b91-8f12-55e01657e02a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.940267 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c492e887-1cb2-4b91-8f12-55e01657e02a-kube-api-access-72xr2" (OuterVolumeSpecName: "kube-api-access-72xr2") pod "c492e887-1cb2-4b91-8f12-55e01657e02a" (UID: "c492e887-1cb2-4b91-8f12-55e01657e02a"). InnerVolumeSpecName "kube-api-access-72xr2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.957668 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c492e887-1cb2-4b91-8f12-55e01657e02a" (UID: "c492e887-1cb2-4b91-8f12-55e01657e02a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:02 crc kubenswrapper[4705]: I0124 08:02:02.976063 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-config-data" (OuterVolumeSpecName: "config-data") pod "c492e887-1cb2-4b91-8f12-55e01657e02a" (UID: "c492e887-1cb2-4b91-8f12-55e01657e02a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:03 crc kubenswrapper[4705]: I0124 08:02:03.012934 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:03 crc kubenswrapper[4705]: I0124 08:02:03.012987 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72xr2\" (UniqueName: \"kubernetes.io/projected/c492e887-1cb2-4b91-8f12-55e01657e02a-kube-api-access-72xr2\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:03 crc kubenswrapper[4705]: I0124 08:02:03.013005 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:03 crc kubenswrapper[4705]: I0124 08:02:03.013027 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c492e887-1cb2-4b91-8f12-55e01657e02a-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Jan 24 08:02:03 crc kubenswrapper[4705]: I0124 08:02:03.013044 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c492e887-1cb2-4b91-8f12-55e01657e02a-logs\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:03 crc kubenswrapper[4705]: I0124 08:02:03.590799 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-866968f4bc-4s76p" Jan 24 08:02:03 crc kubenswrapper[4705]: I0124 08:02:03.604708 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-866968f4bc-4s76p" event={"ID":"c492e887-1cb2-4b91-8f12-55e01657e02a","Type":"ContainerDied","Data":"8dee13bbca499368baf62d10827b632488f2372bd734d338e229b35b6eb5b17f"} Jan 24 08:02:03 crc kubenswrapper[4705]: I0124 08:02:03.605215 4705 scope.go:117] "RemoveContainer" containerID="0809ebe6d700dd6b24e257402c90ef30f09cd9268c453a05d5dab36b68a8d7a1" Jan 24 08:02:03 crc kubenswrapper[4705]: I0124 08:02:03.647238 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-866968f4bc-4s76p"] Jan 24 08:02:03 crc kubenswrapper[4705]: I0124 08:02:03.662141 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-866968f4bc-4s76p"] Jan 24 08:02:03 crc kubenswrapper[4705]: I0124 08:02:03.879045 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:02:03 crc kubenswrapper[4705]: I0124 08:02:03.918113 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b898998b6-6mcbg" podUID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 08:02:03 crc kubenswrapper[4705]: I0124 08:02:03.967737 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-cf78879c9-mgzjj"] Jan 24 08:02:03 crc kubenswrapper[4705]: I0124 08:02:03.968232 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" podUID="cc339718-4e46-4f6a-b36b-efded67a561c" containerName="dnsmasq-dns" containerID="cri-o://80ef3fb7d00c0c5530b1f57793d2316b7775efcc9fc035062478738f5b76c85a" gracePeriod=10 Jan 24 08:02:04 crc kubenswrapper[4705]: I0124 08:02:04.618188 4705 generic.go:334] "Generic (PLEG): container finished" podID="cc339718-4e46-4f6a-b36b-efded67a561c" containerID="80ef3fb7d00c0c5530b1f57793d2316b7775efcc9fc035062478738f5b76c85a" exitCode=0 Jan 24 08:02:04 crc kubenswrapper[4705]: I0124 08:02:04.618297 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" event={"ID":"cc339718-4e46-4f6a-b36b-efded67a561c","Type":"ContainerDied","Data":"80ef3fb7d00c0c5530b1f57793d2316b7775efcc9fc035062478738f5b76c85a"} Jan 24 08:02:04 crc kubenswrapper[4705]: I0124 08:02:04.771297 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b898998b6-6mcbg" podUID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:52478->10.217.0.159:9311: read: connection reset by peer" Jan 24 08:02:04 crc kubenswrapper[4705]: I0124 08:02:04.771942 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b898998b6-6mcbg" podUID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:52494->10.217.0.159:9311: read: connection reset by peer" Jan 24 08:02:04 crc kubenswrapper[4705]: I0124 08:02:04.944485 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:02:05 crc kubenswrapper[4705]: I0124 
08:02:05.601741 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c492e887-1cb2-4b91-8f12-55e01657e02a" path="/var/lib/kubelet/pods/c492e887-1cb2-4b91-8f12-55e01657e02a/volumes" Jan 24 08:02:05 crc kubenswrapper[4705]: I0124 08:02:05.641450 4705 generic.go:334] "Generic (PLEG): container finished" podID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" containerID="c75b1a3b44af21d27f00f758bbb9d217b7294990931cfe2d18afdeebb36a846e" exitCode=0 Jan 24 08:02:05 crc kubenswrapper[4705]: I0124 08:02:05.641533 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b898998b6-6mcbg" event={"ID":"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c","Type":"ContainerDied","Data":"c75b1a3b44af21d27f00f758bbb9d217b7294990931cfe2d18afdeebb36a846e"} Jan 24 08:02:06 crc kubenswrapper[4705]: I0124 08:02:06.673326 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55f654f7bb-65t7w" Jan 24 08:02:06 crc kubenswrapper[4705]: I0124 08:02:06.773002 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-68d4559598-l78mv"] Jan 24 08:02:06 crc kubenswrapper[4705]: I0124 08:02:06.773401 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-68d4559598-l78mv" podUID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" containerName="barbican-api-log" containerID="cri-o://12bccab06aca98ad37c2d5878cc9f7ef18659a62457d4b892c8c832dd95caefb" gracePeriod=30 Jan 24 08:02:06 crc kubenswrapper[4705]: I0124 08:02:06.774109 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-68d4559598-l78mv" podUID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" containerName="barbican-api" containerID="cri-o://7d4a2c00e876484e1333cbc464054980417037f34b1178158b58c0641b887b22" gracePeriod=30 Jan 24 08:02:06 crc kubenswrapper[4705]: I0124 08:02:06.811577 4705 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-68d4559598-l78mv" podUID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": EOF" Jan 24 08:02:06 crc kubenswrapper[4705]: I0124 08:02:06.812144 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-68d4559598-l78mv" podUID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": EOF" Jan 24 08:02:07 crc kubenswrapper[4705]: I0124 08:02:07.286429 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" podUID="cc339718-4e46-4f6a-b36b-efded67a561c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: connect: connection refused" Jan 24 08:02:07 crc kubenswrapper[4705]: I0124 08:02:07.675516 4705 generic.go:334] "Generic (PLEG): container finished" podID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" containerID="12bccab06aca98ad37c2d5878cc9f7ef18659a62457d4b892c8c832dd95caefb" exitCode=143 Jan 24 08:02:07 crc kubenswrapper[4705]: I0124 08:02:07.675598 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68d4559598-l78mv" event={"ID":"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b","Type":"ContainerDied","Data":"12bccab06aca98ad37c2d5878cc9f7ef18659a62457d4b892c8c832dd95caefb"} Jan 24 08:02:07 crc kubenswrapper[4705]: I0124 08:02:07.680869 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lvscj" event={"ID":"fd93ff70-0f51-4af8-9a10-6407f4901667","Type":"ContainerDied","Data":"e80d6cd79edee3a05bdaf723c476e8d24e9a85f1bd291743e5c0e0e55c7221e3"} Jan 24 08:02:07 crc kubenswrapper[4705]: I0124 08:02:07.680888 4705 generic.go:334] "Generic (PLEG): container finished" podID="fd93ff70-0f51-4af8-9a10-6407f4901667" containerID="e80d6cd79edee3a05bdaf723c476e8d24e9a85f1bd291743e5c0e0e55c7221e3" exitCode=0 Jan 24 08:02:08 
crc kubenswrapper[4705]: I0124 08:02:08.790313 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 24 08:02:08 crc kubenswrapper[4705]: I0124 08:02:08.792753 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 24 08:02:08 crc kubenswrapper[4705]: I0124 08:02:08.815637 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b898998b6-6mcbg" podUID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": dial tcp 10.217.0.159:9311: connect: connection refused" Jan 24 08:02:08 crc kubenswrapper[4705]: I0124 08:02:08.815746 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b898998b6-6mcbg" podUID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": dial tcp 10.217.0.159:9311: connect: connection refused" Jan 24 08:02:08 crc kubenswrapper[4705]: I0124 08:02:08.863446 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 24 08:02:08 crc kubenswrapper[4705]: I0124 08:02:08.887192 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.577962 4705 scope.go:117] "RemoveContainer" containerID="85fb2113b63a40d45d189c03d5aa6b3c1d07f92cf4ffc596e9bb2300b02fba99" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.680693 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.695529 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-lvscj" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.755804 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-config\") pod \"cc339718-4e46-4f6a-b36b-efded67a561c\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.755902 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s2n2\" (UniqueName: \"kubernetes.io/projected/fd93ff70-0f51-4af8-9a10-6407f4901667-kube-api-access-8s2n2\") pod \"fd93ff70-0f51-4af8-9a10-6407f4901667\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.755985 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-config-data\") pod \"fd93ff70-0f51-4af8-9a10-6407f4901667\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.756039 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-combined-ca-bundle\") pod \"fd93ff70-0f51-4af8-9a10-6407f4901667\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.756092 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-scripts\") pod \"fd93ff70-0f51-4af8-9a10-6407f4901667\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.756190 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-db-sync-config-data\") pod \"fd93ff70-0f51-4af8-9a10-6407f4901667\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.756250 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-dns-swift-storage-0\") pod \"cc339718-4e46-4f6a-b36b-efded67a561c\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.756278 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-dns-svc\") pod \"cc339718-4e46-4f6a-b36b-efded67a561c\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.756322 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-ovsdbserver-nb\") pod \"cc339718-4e46-4f6a-b36b-efded67a561c\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.756422 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd93ff70-0f51-4af8-9a10-6407f4901667-etc-machine-id\") pod \"fd93ff70-0f51-4af8-9a10-6407f4901667\" (UID: \"fd93ff70-0f51-4af8-9a10-6407f4901667\") " Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.756503 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbggg\" (UniqueName: \"kubernetes.io/projected/cc339718-4e46-4f6a-b36b-efded67a561c-kube-api-access-bbggg\") pod \"cc339718-4e46-4f6a-b36b-efded67a561c\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " Jan 24 08:02:09 crc kubenswrapper[4705]: 
I0124 08:02:09.756563 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-ovsdbserver-sb\") pod \"cc339718-4e46-4f6a-b36b-efded67a561c\" (UID: \"cc339718-4e46-4f6a-b36b-efded67a561c\") " Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.765864 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd93ff70-0f51-4af8-9a10-6407f4901667-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fd93ff70-0f51-4af8-9a10-6407f4901667" (UID: "fd93ff70-0f51-4af8-9a10-6407f4901667"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.774754 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fd93ff70-0f51-4af8-9a10-6407f4901667" (UID: "fd93ff70-0f51-4af8-9a10-6407f4901667"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.792584 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc339718-4e46-4f6a-b36b-efded67a561c-kube-api-access-bbggg" (OuterVolumeSpecName: "kube-api-access-bbggg") pod "cc339718-4e46-4f6a-b36b-efded67a561c" (UID: "cc339718-4e46-4f6a-b36b-efded67a561c"). InnerVolumeSpecName "kube-api-access-bbggg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.793164 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd93ff70-0f51-4af8-9a10-6407f4901667-kube-api-access-8s2n2" (OuterVolumeSpecName: "kube-api-access-8s2n2") pod "fd93ff70-0f51-4af8-9a10-6407f4901667" (UID: "fd93ff70-0f51-4af8-9a10-6407f4901667"). InnerVolumeSpecName "kube-api-access-8s2n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.797928 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.797942 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-mgzjj" event={"ID":"cc339718-4e46-4f6a-b36b-efded67a561c","Type":"ContainerDied","Data":"f45e6e1731740cf3f068b151d886601e55c6a940c407caf0e8dce35141083aad"} Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.798002 4705 scope.go:117] "RemoveContainer" containerID="80ef3fb7d00c0c5530b1f57793d2316b7775efcc9fc035062478738f5b76c85a" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.810398 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.810736 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-scripts" (OuterVolumeSpecName: "scripts") pod "fd93ff70-0f51-4af8-9a10-6407f4901667" (UID: "fd93ff70-0f51-4af8-9a10-6407f4901667"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.821715 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lvscj" event={"ID":"fd93ff70-0f51-4af8-9a10-6407f4901667","Type":"ContainerDied","Data":"1c82d2210eacf7c42a27c9ae5482fe3de9a1b8b6f7c5e600d53f8afa472388a4"} Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.821781 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c82d2210eacf7c42a27c9ae5482fe3de9a1b8b6f7c5e600d53f8afa472388a4" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.821914 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-lvscj" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.849267 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b898998b6-6mcbg" event={"ID":"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c","Type":"ContainerDied","Data":"63d57089549cd123dd8dc839d2fb6de34c4902814e6f210a525b134f8e4f448a"} Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.849392 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b898998b6-6mcbg" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.859038 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbggg\" (UniqueName: \"kubernetes.io/projected/cc339718-4e46-4f6a-b36b-efded67a561c-kube-api-access-bbggg\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.859074 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s2n2\" (UniqueName: \"kubernetes.io/projected/fd93ff70-0f51-4af8-9a10-6407f4901667-kube-api-access-8s2n2\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.859084 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.859092 4705 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.859102 4705 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd93ff70-0f51-4af8-9a10-6407f4901667-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.871850 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.872215 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.879062 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd93ff70-0f51-4af8-9a10-6407f4901667" (UID: "fd93ff70-0f51-4af8-9a10-6407f4901667"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.887799 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc339718-4e46-4f6a-b36b-efded67a561c" (UID: "cc339718-4e46-4f6a-b36b-efded67a561c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.897503 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-config" (OuterVolumeSpecName: "config") pod "cc339718-4e46-4f6a-b36b-efded67a561c" (UID: "cc339718-4e46-4f6a-b36b-efded67a561c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.904551 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc339718-4e46-4f6a-b36b-efded67a561c" (UID: "cc339718-4e46-4f6a-b36b-efded67a561c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.909970 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-config-data" (OuterVolumeSpecName: "config-data") pod "fd93ff70-0f51-4af8-9a10-6407f4901667" (UID: "fd93ff70-0f51-4af8-9a10-6407f4901667"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.926706 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cc339718-4e46-4f6a-b36b-efded67a561c" (UID: "cc339718-4e46-4f6a-b36b-efded67a561c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.938554 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc339718-4e46-4f6a-b36b-efded67a561c" (UID: "cc339718-4e46-4f6a-b36b-efded67a561c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.960307 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-combined-ca-bundle\") pod \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.960423 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpg9g\" (UniqueName: \"kubernetes.io/projected/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-kube-api-access-dpg9g\") pod \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.960448 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-config-data-custom\") pod \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\" (UID: 
\"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.960486 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-config-data\") pod \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.960594 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-logs\") pod \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\" (UID: \"5a6b0e2f-f7f6-4cec-abb1-fea5890f511c\") " Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.961029 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.961043 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.961054 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.961064 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd93ff70-0f51-4af8-9a10-6407f4901667-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.961077 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.961086 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.961094 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc339718-4e46-4f6a-b36b-efded67a561c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.964130 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-logs" (OuterVolumeSpecName: "logs") pod "5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" (UID: "5a6b0e2f-f7f6-4cec-abb1-fea5890f511c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.967238 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-kube-api-access-dpg9g" (OuterVolumeSpecName: "kube-api-access-dpg9g") pod "5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" (UID: "5a6b0e2f-f7f6-4cec-abb1-fea5890f511c"). InnerVolumeSpecName "kube-api-access-dpg9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.967279 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" (UID: "5a6b0e2f-f7f6-4cec-abb1-fea5890f511c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:09 crc kubenswrapper[4705]: I0124 08:02:09.999618 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" (UID: "5a6b0e2f-f7f6-4cec-abb1-fea5890f511c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.033546 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-config-data" (OuterVolumeSpecName: "config-data") pod "5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" (UID: "5a6b0e2f-f7f6-4cec-abb1-fea5890f511c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.067242 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.067991 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpg9g\" (UniqueName: \"kubernetes.io/projected/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-kube-api-access-dpg9g\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.068112 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.068204 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.068316 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c-logs\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.111013 4705 scope.go:117] "RemoveContainer" containerID="526ec6189612107234019a3b343ed79f0dc242c063c39f7af15c5802bd7da53d" Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.149954 4705 scope.go:117] "RemoveContainer" containerID="c75b1a3b44af21d27f00f758bbb9d217b7294990931cfe2d18afdeebb36a846e" Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.166726 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-mgzjj"] Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.178017 4705 scope.go:117] "RemoveContainer" containerID="84c92ee74874ce67e881123994ac0468aaeeca92bf89b8294cfc5fb9ce961c28" Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.186153 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-mgzjj"] Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.225287 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b898998b6-6mcbg"] Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.235017 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-b898998b6-6mcbg"] Jan 24 08:02:10 crc kubenswrapper[4705]: E0124 08:02:10.251503 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="787ad3bd-2593-42a7-b368-70abddcd74da" Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.891675 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"787ad3bd-2593-42a7-b368-70abddcd74da","Type":"ContainerStarted","Data":"c91e918230e493470a327c6187b94fa0aaf221efc2c16ad208a35fa23a7970ae"} Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.892266 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="787ad3bd-2593-42a7-b368-70abddcd74da" containerName="ceilometer-notification-agent" containerID="cri-o://29b1229f77bf2b10062994cd722d4cec0626c16f125b04fe450268fefe2563f9" gracePeriod=30 Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.892439 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="787ad3bd-2593-42a7-b368-70abddcd74da" containerName="proxy-httpd" containerID="cri-o://c91e918230e493470a327c6187b94fa0aaf221efc2c16ad208a35fa23a7970ae" gracePeriod=30 Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.892509 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="787ad3bd-2593-42a7-b368-70abddcd74da" containerName="sg-core" containerID="cri-o://3102b2fef3f57292cb5b6b9258bc414e29b75b66237f8a94e7a97fd6f4617f74" gracePeriod=30 Jan 24 08:02:10 crc kubenswrapper[4705]: I0124 08:02:10.892920 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.191047 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 08:02:11 crc kubenswrapper[4705]: E0124 08:02:11.201712 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc339718-4e46-4f6a-b36b-efded67a561c" containerName="dnsmasq-dns" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.201745 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc339718-4e46-4f6a-b36b-efded67a561c" containerName="dnsmasq-dns" Jan 24 08:02:11 crc kubenswrapper[4705]: E0124 08:02:11.201762 4705 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5ee7d188-54dd-45e5-95f9-75b0bcef52d3" containerName="dnsmasq-dns" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.201769 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee7d188-54dd-45e5-95f9-75b0bcef52d3" containerName="dnsmasq-dns" Jan 24 08:02:11 crc kubenswrapper[4705]: E0124 08:02:11.201783 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd93ff70-0f51-4af8-9a10-6407f4901667" containerName="cinder-db-sync" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.201790 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd93ff70-0f51-4af8-9a10-6407f4901667" containerName="cinder-db-sync" Jan 24 08:02:11 crc kubenswrapper[4705]: E0124 08:02:11.201802 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" containerName="barbican-api-log" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.201808 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" containerName="barbican-api-log" Jan 24 08:02:11 crc kubenswrapper[4705]: E0124 08:02:11.201834 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee7d188-54dd-45e5-95f9-75b0bcef52d3" containerName="init" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.201841 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee7d188-54dd-45e5-95f9-75b0bcef52d3" containerName="init" Jan 24 08:02:11 crc kubenswrapper[4705]: E0124 08:02:11.201849 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492e887-1cb2-4b91-8f12-55e01657e02a" containerName="barbican-worker" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.201855 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492e887-1cb2-4b91-8f12-55e01657e02a" containerName="barbican-worker" Jan 24 08:02:11 crc kubenswrapper[4705]: E0124 08:02:11.201866 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e66c9c6e-cac9-4e99-b4d7-532f87f30ada" containerName="heat-db-sync" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.201874 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66c9c6e-cac9-4e99-b4d7-532f87f30ada" containerName="heat-db-sync" Jan 24 08:02:11 crc kubenswrapper[4705]: E0124 08:02:11.201889 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" containerName="barbican-api" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.201897 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" containerName="barbican-api" Jan 24 08:02:11 crc kubenswrapper[4705]: E0124 08:02:11.201910 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492e887-1cb2-4b91-8f12-55e01657e02a" containerName="barbican-worker-log" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.201916 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492e887-1cb2-4b91-8f12-55e01657e02a" containerName="barbican-worker-log" Jan 24 08:02:11 crc kubenswrapper[4705]: E0124 08:02:11.201928 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc339718-4e46-4f6a-b36b-efded67a561c" containerName="init" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.201934 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc339718-4e46-4f6a-b36b-efded67a561c" containerName="init" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.202147 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc339718-4e46-4f6a-b36b-efded67a561c" containerName="dnsmasq-dns" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.202156 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e66c9c6e-cac9-4e99-b4d7-532f87f30ada" containerName="heat-db-sync" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.202166 4705 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5ee7d188-54dd-45e5-95f9-75b0bcef52d3" containerName="dnsmasq-dns" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.202182 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c492e887-1cb2-4b91-8f12-55e01657e02a" containerName="barbican-worker" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.202193 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" containerName="barbican-api-log" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.202202 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd93ff70-0f51-4af8-9a10-6407f4901667" containerName="cinder-db-sync" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.202333 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" containerName="barbican-api" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.202344 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c492e887-1cb2-4b91-8f12-55e01657e02a" containerName="barbican-worker-log" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.203447 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.211198 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xc858" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.211298 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.211206 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.211416 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.251083 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.304088 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.305087 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxz49\" (UniqueName: \"kubernetes.io/projected/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-kube-api-access-mxz49\") pod \"cinder-scheduler-0\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.305260 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.305395 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-config-data\") pod \"cinder-scheduler-0\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.305594 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-scripts\") pod \"cinder-scheduler-0\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.305736 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.323656 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gsn9w"] Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.326890 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.368395 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gsn9w"] Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.410487 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-dns-svc\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.410563 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.410622 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxz49\" (UniqueName: \"kubernetes.io/projected/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-kube-api-access-mxz49\") pod \"cinder-scheduler-0\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.410663 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.410690 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-config-data\") pod \"cinder-scheduler-0\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.410724 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.410766 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-scripts\") pod \"cinder-scheduler-0\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.410785 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-config\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.410836 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.410868 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29hmr\" (UniqueName: 
\"kubernetes.io/projected/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-kube-api-access-29hmr\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.410908 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.410948 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.415470 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.426527 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-scripts\") pod \"cinder-scheduler-0\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.427175 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-config-data-custom\") pod \"cinder-scheduler-0\" 
(UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.427958 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.458815 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-config-data\") pod \"cinder-scheduler-0\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.470798 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxz49\" (UniqueName: \"kubernetes.io/projected/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-kube-api-access-mxz49\") pod \"cinder-scheduler-0\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.512305 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-config\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.512583 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29hmr\" (UniqueName: \"kubernetes.io/projected/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-kube-api-access-29hmr\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: 
I0124 08:02:11.512666 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.512798 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-dns-svc\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.512900 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.513040 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.513981 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.514565 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-config\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.515509 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.516891 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-dns-svc\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.517499 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.548411 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.569900 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29hmr\" (UniqueName: \"kubernetes.io/projected/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-kube-api-access-29hmr\") pod \"dnsmasq-dns-6578955fd5-gsn9w\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.575960 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.580610 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.592015 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.604088 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a6b0e2f-f7f6-4cec-abb1-fea5890f511c" path="/var/lib/kubelet/pods/5a6b0e2f-f7f6-4cec-abb1-fea5890f511c/volumes" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.605091 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc339718-4e46-4f6a-b36b-efded67a561c" path="/var/lib/kubelet/pods/cc339718-4e46-4f6a-b36b-efded67a561c/volumes" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.619890 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.671360 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.717073 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-scripts\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.717198 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f4bbbf7-117f-4759-86b7-d5df766762e3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.717247 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f4bbbf7-117f-4759-86b7-d5df766762e3-logs\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.717334 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-config-data\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.717361 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-config-data-custom\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.717383 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfw6p\" (UniqueName: \"kubernetes.io/projected/1f4bbbf7-117f-4759-86b7-d5df766762e3-kube-api-access-hfw6p\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.718287 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.820264 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-config-data\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.820326 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-config-data-custom\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.820353 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfw6p\" (UniqueName: \"kubernetes.io/projected/1f4bbbf7-117f-4759-86b7-d5df766762e3-kube-api-access-hfw6p\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.820439 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.820475 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-scripts\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.820534 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f4bbbf7-117f-4759-86b7-d5df766762e3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.820584 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f4bbbf7-117f-4759-86b7-d5df766762e3-logs\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.821057 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f4bbbf7-117f-4759-86b7-d5df766762e3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.821176 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f4bbbf7-117f-4759-86b7-d5df766762e3-logs\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.828105 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.829893 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-scripts\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.833329 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-config-data\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.843746 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfw6p\" (UniqueName: \"kubernetes.io/projected/1f4bbbf7-117f-4759-86b7-d5df766762e3-kube-api-access-hfw6p\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.850375 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-config-data-custom\") pod \"cinder-api-0\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.915023 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.929977 4705 generic.go:334] "Generic (PLEG): container finished" podID="787ad3bd-2593-42a7-b368-70abddcd74da" containerID="c91e918230e493470a327c6187b94fa0aaf221efc2c16ad208a35fa23a7970ae" exitCode=0 Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.930023 4705 generic.go:334] "Generic (PLEG): container finished" podID="787ad3bd-2593-42a7-b368-70abddcd74da" containerID="3102b2fef3f57292cb5b6b9258bc414e29b75b66237f8a94e7a97fd6f4617f74" exitCode=2 Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.930052 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"787ad3bd-2593-42a7-b368-70abddcd74da","Type":"ContainerDied","Data":"c91e918230e493470a327c6187b94fa0aaf221efc2c16ad208a35fa23a7970ae"} Jan 24 08:02:11 crc kubenswrapper[4705]: I0124 08:02:11.930097 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"787ad3bd-2593-42a7-b368-70abddcd74da","Type":"ContainerDied","Data":"3102b2fef3f57292cb5b6b9258bc414e29b75b66237f8a94e7a97fd6f4617f74"} Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.021275 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-68d4559598-l78mv" podUID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.021315 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-68d4559598-l78mv" podUID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 
08:02:12.169577 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.239758 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-68d4559598-l78mv" podUID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:38250->10.217.0.163:9311: read: connection reset by peer" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.239816 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-68d4559598-l78mv" podUID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:38234->10.217.0.163:9311: read: connection reset by peer" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.351137 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gsn9w"] Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.521583 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.751231 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.771526 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.779363 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.867514 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-config-data-custom\") pod \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.867586 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-logs\") pod \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.867732 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-combined-ca-bundle\") pod \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.867797 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-config-data\") pod \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.867861 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-68vql\" (UniqueName: \"kubernetes.io/projected/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-kube-api-access-68vql\") pod \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\" (UID: \"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b\") " Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.869222 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-logs" (OuterVolumeSpecName: "logs") pod "0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" (UID: "0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.878634 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-kube-api-access-68vql" (OuterVolumeSpecName: "kube-api-access-68vql") pod "0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" (UID: "0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b"). InnerVolumeSpecName "kube-api-access-68vql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.892045 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" (UID: "0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.926198 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" (UID: "0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.958944 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-config-data" (OuterVolumeSpecName: "config-data") pod "0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" (UID: "0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.961629 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f4bbbf7-117f-4759-86b7-d5df766762e3","Type":"ContainerStarted","Data":"8e61805ae202211afdf693c6df27eed4b1af6d3ec163a467b8b21095d9ef4a6e"} Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.963945 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0dba7b53-b7e7-430c-bab3-5d5075b16fa3","Type":"ContainerStarted","Data":"09dc86c9aae395c6dc57e2b8e959382c214f67daef53ec7442b723bbfc9cef49"} Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.969577 4705 generic.go:334] "Generic (PLEG): container finished" podID="0ebc3649-e2d4-4769-9a0e-4ba52f06f077" containerID="171024bfa06b34f5ac6b90bb4d081e5fed7d4ab0ddb109a815cd5ebab673bacf" exitCode=0 Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.969678 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" event={"ID":"0ebc3649-e2d4-4769-9a0e-4ba52f06f077","Type":"ContainerDied","Data":"171024bfa06b34f5ac6b90bb4d081e5fed7d4ab0ddb109a815cd5ebab673bacf"} Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.969710 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" event={"ID":"0ebc3649-e2d4-4769-9a0e-4ba52f06f077","Type":"ContainerStarted","Data":"66c127678c34410bb8fe70b868531bd419e02bb183158a118e3a52210d0c6d46"} Jan 24 08:02:12 crc 
kubenswrapper[4705]: I0124 08:02:12.970895 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.970923 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.970934 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68vql\" (UniqueName: \"kubernetes.io/projected/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-kube-api-access-68vql\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.970947 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.970958 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b-logs\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.979849 4705 generic.go:334] "Generic (PLEG): container finished" podID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" containerID="7d4a2c00e876484e1333cbc464054980417037f34b1178158b58c0641b887b22" exitCode=0 Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.980815 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68d4559598-l78mv" event={"ID":"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b","Type":"ContainerDied","Data":"7d4a2c00e876484e1333cbc464054980417037f34b1178158b58c0641b887b22"} Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.980923 4705 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/barbican-api-68d4559598-l78mv" event={"ID":"0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b","Type":"ContainerDied","Data":"69c5e8d5ea3aeeecfdf3393719935a80c81fd76bc1281982e7ee87c02110434b"} Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.980984 4705 scope.go:117] "RemoveContainer" containerID="7d4a2c00e876484e1333cbc464054980417037f34b1178158b58c0641b887b22" Jan 24 08:02:12 crc kubenswrapper[4705]: I0124 08:02:12.981349 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-68d4559598-l78mv" Jan 24 08:02:13 crc kubenswrapper[4705]: I0124 08:02:13.073346 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-68d4559598-l78mv"] Jan 24 08:02:13 crc kubenswrapper[4705]: I0124 08:02:13.090325 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-68d4559598-l78mv"] Jan 24 08:02:13 crc kubenswrapper[4705]: I0124 08:02:13.102523 4705 scope.go:117] "RemoveContainer" containerID="12bccab06aca98ad37c2d5878cc9f7ef18659a62457d4b892c8c832dd95caefb" Jan 24 08:02:13 crc kubenswrapper[4705]: I0124 08:02:13.217853 4705 scope.go:117] "RemoveContainer" containerID="7d4a2c00e876484e1333cbc464054980417037f34b1178158b58c0641b887b22" Jan 24 08:02:13 crc kubenswrapper[4705]: E0124 08:02:13.224013 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d4a2c00e876484e1333cbc464054980417037f34b1178158b58c0641b887b22\": container with ID starting with 7d4a2c00e876484e1333cbc464054980417037f34b1178158b58c0641b887b22 not found: ID does not exist" containerID="7d4a2c00e876484e1333cbc464054980417037f34b1178158b58c0641b887b22" Jan 24 08:02:13 crc kubenswrapper[4705]: I0124 08:02:13.224065 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d4a2c00e876484e1333cbc464054980417037f34b1178158b58c0641b887b22"} err="failed to get container status 
\"7d4a2c00e876484e1333cbc464054980417037f34b1178158b58c0641b887b22\": rpc error: code = NotFound desc = could not find container \"7d4a2c00e876484e1333cbc464054980417037f34b1178158b58c0641b887b22\": container with ID starting with 7d4a2c00e876484e1333cbc464054980417037f34b1178158b58c0641b887b22 not found: ID does not exist" Jan 24 08:02:13 crc kubenswrapper[4705]: I0124 08:02:13.224101 4705 scope.go:117] "RemoveContainer" containerID="12bccab06aca98ad37c2d5878cc9f7ef18659a62457d4b892c8c832dd95caefb" Jan 24 08:02:13 crc kubenswrapper[4705]: E0124 08:02:13.224625 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12bccab06aca98ad37c2d5878cc9f7ef18659a62457d4b892c8c832dd95caefb\": container with ID starting with 12bccab06aca98ad37c2d5878cc9f7ef18659a62457d4b892c8c832dd95caefb not found: ID does not exist" containerID="12bccab06aca98ad37c2d5878cc9f7ef18659a62457d4b892c8c832dd95caefb" Jan 24 08:02:13 crc kubenswrapper[4705]: I0124 08:02:13.224655 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12bccab06aca98ad37c2d5878cc9f7ef18659a62457d4b892c8c832dd95caefb"} err="failed to get container status \"12bccab06aca98ad37c2d5878cc9f7ef18659a62457d4b892c8c832dd95caefb\": rpc error: code = NotFound desc = could not find container \"12bccab06aca98ad37c2d5878cc9f7ef18659a62457d4b892c8c832dd95caefb\": container with ID starting with 12bccab06aca98ad37c2d5878cc9f7ef18659a62457d4b892c8c832dd95caefb not found: ID does not exist" Jan 24 08:02:13 crc kubenswrapper[4705]: I0124 08:02:13.634783 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" path="/var/lib/kubelet/pods/0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b/volumes" Jan 24 08:02:13 crc kubenswrapper[4705]: I0124 08:02:13.783965 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 24 08:02:14 crc 
kubenswrapper[4705]: I0124 08:02:14.000555 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" event={"ID":"0ebc3649-e2d4-4769-9a0e-4ba52f06f077","Type":"ContainerStarted","Data":"d031c09ef630a37660a5db6fffd3941fb71ae9d065d2a4117a67a9dd2c640567"} Jan 24 08:02:14 crc kubenswrapper[4705]: I0124 08:02:14.001248 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:14 crc kubenswrapper[4705]: I0124 08:02:14.010016 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f4bbbf7-117f-4759-86b7-d5df766762e3","Type":"ContainerStarted","Data":"957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83"} Jan 24 08:02:14 crc kubenswrapper[4705]: I0124 08:02:14.029334 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" podStartSLOduration=3.029301518 podStartE2EDuration="3.029301518s" podCreationTimestamp="2026-01-24 08:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:02:14.021635644 +0000 UTC m=+1272.741508932" watchObservedRunningTime="2026-01-24 08:02:14.029301518 +0000 UTC m=+1272.749174806" Jan 24 08:02:14 crc kubenswrapper[4705]: I0124 08:02:14.658804 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:02:14 crc kubenswrapper[4705]: I0124 08:02:14.952782 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9b94fb6bf-sxxmr"] Jan 24 08:02:14 crc kubenswrapper[4705]: I0124 08:02:14.953126 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-9b94fb6bf-sxxmr" podUID="688d82f6-748b-42fa-8595-b24a65ba77d3" containerName="neutron-api" 
containerID="cri-o://0ba3fb74191edff069c8e1acc25b146de11147efac97ca219f87b27119c5b1f8" gracePeriod=30 Jan 24 08:02:14 crc kubenswrapper[4705]: I0124 08:02:14.953309 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-9b94fb6bf-sxxmr" podUID="688d82f6-748b-42fa-8595-b24a65ba77d3" containerName="neutron-httpd" containerID="cri-o://a3098359904e16f8784ce49d0d993a5c7f30ea1518d1ed57f78d203e2b46e56f" gracePeriod=30 Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.003445 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-9b94fb6bf-sxxmr" podUID="688d82f6-748b-42fa-8595-b24a65ba77d3" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.153:9696/\": EOF" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.015515 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bcd878cb5-xnt7l"] Jan 24 08:02:15 crc kubenswrapper[4705]: E0124 08:02:15.015939 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" containerName="barbican-api" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.015960 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" containerName="barbican-api" Jan 24 08:02:15 crc kubenswrapper[4705]: E0124 08:02:15.015979 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" containerName="barbican-api-log" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.015987 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" containerName="barbican-api-log" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.016159 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" containerName="barbican-api" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.016187 4705 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0eae1a8a-9c13-483d-a3d9-0cb2f0f76c3b" containerName="barbican-api-log" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.017138 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.025723 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0dba7b53-b7e7-430c-bab3-5d5075b16fa3","Type":"ContainerStarted","Data":"967d1f93ca97dcec847f9e488ccdb8e4e79a97f2592827c2c7e38e20a8072c8c"} Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.025784 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0dba7b53-b7e7-430c-bab3-5d5075b16fa3","Type":"ContainerStarted","Data":"86bac3e0503430e17af877bf52998bf106f4485fb2afd86d46ab5c9cd8422b0c"} Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.029080 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1f4bbbf7-117f-4759-86b7-d5df766762e3" containerName="cinder-api-log" containerID="cri-o://957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83" gracePeriod=30 Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.029476 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1f4bbbf7-117f-4759-86b7-d5df766762e3" containerName="cinder-api" containerID="cri-o://eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9" gracePeriod=30 Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.029915 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f4bbbf7-117f-4759-86b7-d5df766762e3","Type":"ContainerStarted","Data":"eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9"} Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.029959 4705 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.113557 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bcd878cb5-xnt7l"] Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.136836 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.136796887 podStartE2EDuration="4.136796887s" podCreationTimestamp="2026-01-24 08:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:02:15.119474913 +0000 UTC m=+1273.839348201" watchObservedRunningTime="2026-01-24 08:02:15.136796887 +0000 UTC m=+1273.856670175" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.144812 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-httpd-config\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.144976 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-combined-ca-bundle\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.145025 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-config\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc 
kubenswrapper[4705]: I0124 08:02:15.145081 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-internal-tls-certs\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.145181 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-ovndb-tls-certs\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.145215 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz75r\" (UniqueName: \"kubernetes.io/projected/def20def-8ec8-4bb9-9c58-c557b1610ae9-kube-api-access-nz75r\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.145257 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-public-tls-certs\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.186435 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.217793935 podStartE2EDuration="4.186395813s" podCreationTimestamp="2026-01-24 08:02:11 +0000 UTC" firstStartedPulling="2026-01-24 08:02:12.192829055 +0000 UTC m=+1270.912702343" lastFinishedPulling="2026-01-24 
08:02:13.161430933 +0000 UTC m=+1271.881304221" observedRunningTime="2026-01-24 08:02:15.149928174 +0000 UTC m=+1273.869801462" watchObservedRunningTime="2026-01-24 08:02:15.186395813 +0000 UTC m=+1273.906269101" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.248908 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-ovndb-tls-certs\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.249039 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz75r\" (UniqueName: \"kubernetes.io/projected/def20def-8ec8-4bb9-9c58-c557b1610ae9-kube-api-access-nz75r\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.249074 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-public-tls-certs\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.249166 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-httpd-config\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.249297 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-combined-ca-bundle\") 
pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.249343 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-internal-tls-certs\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.249369 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-config\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.264032 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-config\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.264063 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-internal-tls-certs\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.267462 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-public-tls-certs\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc 
kubenswrapper[4705]: I0124 08:02:15.268294 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-httpd-config\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.282437 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz75r\" (UniqueName: \"kubernetes.io/projected/def20def-8ec8-4bb9-9c58-c557b1610ae9-kube-api-access-nz75r\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.284139 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-combined-ca-bundle\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.286993 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/def20def-8ec8-4bb9-9c58-c557b1610ae9-ovndb-tls-certs\") pod \"neutron-bcd878cb5-xnt7l\" (UID: \"def20def-8ec8-4bb9-9c58-c557b1610ae9\") " pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:15 crc kubenswrapper[4705]: I0124 08:02:15.341087 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.007260 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.049176 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bcd878cb5-xnt7l"] Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.052281 4705 generic.go:334] "Generic (PLEG): container finished" podID="688d82f6-748b-42fa-8595-b24a65ba77d3" containerID="a3098359904e16f8784ce49d0d993a5c7f30ea1518d1ed57f78d203e2b46e56f" exitCode=0 Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.052374 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b94fb6bf-sxxmr" event={"ID":"688d82f6-748b-42fa-8595-b24a65ba77d3","Type":"ContainerDied","Data":"a3098359904e16f8784ce49d0d993a5c7f30ea1518d1ed57f78d203e2b46e56f"} Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.055724 4705 generic.go:334] "Generic (PLEG): container finished" podID="1f4bbbf7-117f-4759-86b7-d5df766762e3" containerID="eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9" exitCode=0 Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.055764 4705 generic.go:334] "Generic (PLEG): container finished" podID="1f4bbbf7-117f-4759-86b7-d5df766762e3" containerID="957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83" exitCode=143 Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.055891 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f4bbbf7-117f-4759-86b7-d5df766762e3","Type":"ContainerDied","Data":"eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9"} Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.055962 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f4bbbf7-117f-4759-86b7-d5df766762e3","Type":"ContainerDied","Data":"957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83"} Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.055984 4705 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.056022 4705 scope.go:117] "RemoveContainer" containerID="eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.055978 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f4bbbf7-117f-4759-86b7-d5df766762e3","Type":"ContainerDied","Data":"8e61805ae202211afdf693c6df27eed4b1af6d3ec163a467b8b21095d9ef4a6e"} Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.060740 4705 generic.go:334] "Generic (PLEG): container finished" podID="787ad3bd-2593-42a7-b368-70abddcd74da" containerID="29b1229f77bf2b10062994cd722d4cec0626c16f125b04fe450268fefe2563f9" exitCode=0 Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.063060 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"787ad3bd-2593-42a7-b368-70abddcd74da","Type":"ContainerDied","Data":"29b1229f77bf2b10062994cd722d4cec0626c16f125b04fe450268fefe2563f9"} Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.122959 4705 scope.go:117] "RemoveContainer" containerID="957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.137322 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-config-data\") pod \"1f4bbbf7-117f-4759-86b7-d5df766762e3\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.137461 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f4bbbf7-117f-4759-86b7-d5df766762e3-logs\") pod \"1f4bbbf7-117f-4759-86b7-d5df766762e3\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 
08:02:16.137544 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfw6p\" (UniqueName: \"kubernetes.io/projected/1f4bbbf7-117f-4759-86b7-d5df766762e3-kube-api-access-hfw6p\") pod \"1f4bbbf7-117f-4759-86b7-d5df766762e3\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.137687 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-config-data-custom\") pod \"1f4bbbf7-117f-4759-86b7-d5df766762e3\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.137814 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-scripts\") pod \"1f4bbbf7-117f-4759-86b7-d5df766762e3\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.137976 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f4bbbf7-117f-4759-86b7-d5df766762e3-etc-machine-id\") pod \"1f4bbbf7-117f-4759-86b7-d5df766762e3\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.141517 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-combined-ca-bundle\") pod \"1f4bbbf7-117f-4759-86b7-d5df766762e3\" (UID: \"1f4bbbf7-117f-4759-86b7-d5df766762e3\") " Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.138394 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f4bbbf7-117f-4759-86b7-d5df766762e3-logs" (OuterVolumeSpecName: "logs") pod 
"1f4bbbf7-117f-4759-86b7-d5df766762e3" (UID: "1f4bbbf7-117f-4759-86b7-d5df766762e3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.138225 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4bbbf7-117f-4759-86b7-d5df766762e3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1f4bbbf7-117f-4759-86b7-d5df766762e3" (UID: "1f4bbbf7-117f-4759-86b7-d5df766762e3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.146939 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1f4bbbf7-117f-4759-86b7-d5df766762e3" (UID: "1f4bbbf7-117f-4759-86b7-d5df766762e3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.147173 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4bbbf7-117f-4759-86b7-d5df766762e3-kube-api-access-hfw6p" (OuterVolumeSpecName: "kube-api-access-hfw6p") pod "1f4bbbf7-117f-4759-86b7-d5df766762e3" (UID: "1f4bbbf7-117f-4759-86b7-d5df766762e3"). InnerVolumeSpecName "kube-api-access-hfw6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.156233 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-scripts" (OuterVolumeSpecName: "scripts") pod "1f4bbbf7-117f-4759-86b7-d5df766762e3" (UID: "1f4bbbf7-117f-4759-86b7-d5df766762e3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.166628 4705 scope.go:117] "RemoveContainer" containerID="eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9" Jan 24 08:02:16 crc kubenswrapper[4705]: E0124 08:02:16.167312 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9\": container with ID starting with eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9 not found: ID does not exist" containerID="eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.167361 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9"} err="failed to get container status \"eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9\": rpc error: code = NotFound desc = could not find container \"eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9\": container with ID starting with eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9 not found: ID does not exist" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.167395 4705 scope.go:117] "RemoveContainer" containerID="957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83" Jan 24 08:02:16 crc kubenswrapper[4705]: E0124 08:02:16.168081 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83\": container with ID starting with 957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83 not found: ID does not exist" containerID="957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.168114 
4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83"} err="failed to get container status \"957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83\": rpc error: code = NotFound desc = could not find container \"957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83\": container with ID starting with 957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83 not found: ID does not exist" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.168135 4705 scope.go:117] "RemoveContainer" containerID="eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.172525 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9"} err="failed to get container status \"eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9\": rpc error: code = NotFound desc = could not find container \"eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9\": container with ID starting with eb5d85fda1701edc6493308dade04af37b53e8bd9daf4a8fd893ecaff6b1ecd9 not found: ID does not exist" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.172615 4705 scope.go:117] "RemoveContainer" containerID="957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.173109 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83"} err="failed to get container status \"957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83\": rpc error: code = NotFound desc = could not find container \"957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83\": container with ID starting with 
957e2163b960a1ec03c15c609dc4ff4f6423eed5a117d62866ca87a51646ea83 not found: ID does not exist" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.194118 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f4bbbf7-117f-4759-86b7-d5df766762e3" (UID: "1f4bbbf7-117f-4759-86b7-d5df766762e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.225147 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-config-data" (OuterVolumeSpecName: "config-data") pod "1f4bbbf7-117f-4759-86b7-d5df766762e3" (UID: "1f4bbbf7-117f-4759-86b7-d5df766762e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.244845 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.244881 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f4bbbf7-117f-4759-86b7-d5df766762e3-logs\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.244898 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfw6p\" (UniqueName: \"kubernetes.io/projected/1f4bbbf7-117f-4759-86b7-d5df766762e3-kube-api-access-hfw6p\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.244913 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-config-data-custom\") 
on node \"crc\" DevicePath \"\"" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.244925 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.244936 4705 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f4bbbf7-117f-4759-86b7-d5df766762e3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.244949 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4bbbf7-117f-4759-86b7-d5df766762e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.428201 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.447962 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.468962 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 24 08:02:16 crc kubenswrapper[4705]: E0124 08:02:16.469674 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4bbbf7-117f-4759-86b7-d5df766762e3" containerName="cinder-api-log" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.469699 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4bbbf7-117f-4759-86b7-d5df766762e3" containerName="cinder-api-log" Jan 24 08:02:16 crc kubenswrapper[4705]: E0124 08:02:16.469730 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4bbbf7-117f-4759-86b7-d5df766762e3" containerName="cinder-api" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.469738 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1f4bbbf7-117f-4759-86b7-d5df766762e3" containerName="cinder-api" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.470492 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4bbbf7-117f-4759-86b7-d5df766762e3" containerName="cinder-api" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.470531 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4bbbf7-117f-4759-86b7-d5df766762e3" containerName="cinder-api-log" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.471754 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.476591 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.483318 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.484856 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.485090 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.550084 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.652006 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-logs\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.652058 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.652111 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-config-data\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.652173 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.652233 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-config-data-custom\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.652272 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmqnz\" (UniqueName: \"kubernetes.io/projected/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-kube-api-access-bmqnz\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.652317 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.652372 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.652400 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-scripts\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.671640 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.755080 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.756374 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-scripts\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.756899 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-logs\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.757122 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.757473 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-config-data\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.757692 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.758144 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-config-data-custom\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.758306 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmqnz\" (UniqueName: \"kubernetes.io/projected/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-kube-api-access-bmqnz\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.758338 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.758726 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-logs\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.759226 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 
08:02:16.764292 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-scripts\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.765715 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-config-data\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.766854 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.769224 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.769336 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-config-data-custom\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.769345 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.780157 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmqnz\" (UniqueName: \"kubernetes.io/projected/6b3b0e00-82d8-4096-80b1-a9edffb3cdaf-kube-api-access-bmqnz\") pod \"cinder-api-0\" (UID: \"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf\") " pod="openstack/cinder-api-0" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.859321 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/787ad3bd-2593-42a7-b368-70abddcd74da-log-httpd\") pod \"787ad3bd-2593-42a7-b368-70abddcd74da\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.859410 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-scripts\") pod \"787ad3bd-2593-42a7-b368-70abddcd74da\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.859440 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-config-data\") pod \"787ad3bd-2593-42a7-b368-70abddcd74da\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.859524 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnd89\" (UniqueName: \"kubernetes.io/projected/787ad3bd-2593-42a7-b368-70abddcd74da-kube-api-access-rnd89\") pod \"787ad3bd-2593-42a7-b368-70abddcd74da\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.859550 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-sg-core-conf-yaml\") pod \"787ad3bd-2593-42a7-b368-70abddcd74da\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.859609 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-combined-ca-bundle\") pod \"787ad3bd-2593-42a7-b368-70abddcd74da\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.859654 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/787ad3bd-2593-42a7-b368-70abddcd74da-run-httpd\") pod \"787ad3bd-2593-42a7-b368-70abddcd74da\" (UID: \"787ad3bd-2593-42a7-b368-70abddcd74da\") " Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.860521 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/787ad3bd-2593-42a7-b368-70abddcd74da-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "787ad3bd-2593-42a7-b368-70abddcd74da" (UID: "787ad3bd-2593-42a7-b368-70abddcd74da"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.860991 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/787ad3bd-2593-42a7-b368-70abddcd74da-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "787ad3bd-2593-42a7-b368-70abddcd74da" (UID: "787ad3bd-2593-42a7-b368-70abddcd74da"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.866526 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/787ad3bd-2593-42a7-b368-70abddcd74da-kube-api-access-rnd89" (OuterVolumeSpecName: "kube-api-access-rnd89") pod "787ad3bd-2593-42a7-b368-70abddcd74da" (UID: "787ad3bd-2593-42a7-b368-70abddcd74da"). InnerVolumeSpecName "kube-api-access-rnd89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.866671 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-scripts" (OuterVolumeSpecName: "scripts") pod "787ad3bd-2593-42a7-b368-70abddcd74da" (UID: "787ad3bd-2593-42a7-b368-70abddcd74da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.910496 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "787ad3bd-2593-42a7-b368-70abddcd74da" (UID: "787ad3bd-2593-42a7-b368-70abddcd74da"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.928083 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "787ad3bd-2593-42a7-b368-70abddcd74da" (UID: "787ad3bd-2593-42a7-b368-70abddcd74da"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.963456 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnd89\" (UniqueName: \"kubernetes.io/projected/787ad3bd-2593-42a7-b368-70abddcd74da-kube-api-access-rnd89\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.963972 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.964318 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.964413 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/787ad3bd-2593-42a7-b368-70abddcd74da-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.964498 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/787ad3bd-2593-42a7-b368-70abddcd74da-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.964617 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.964733 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-config-data" (OuterVolumeSpecName: "config-data") pod "787ad3bd-2593-42a7-b368-70abddcd74da" (UID: "787ad3bd-2593-42a7-b368-70abddcd74da"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:16 crc kubenswrapper[4705]: I0124 08:02:16.966547 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.067119 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787ad3bd-2593-42a7-b368-70abddcd74da-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.104479 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bcd878cb5-xnt7l" event={"ID":"def20def-8ec8-4bb9-9c58-c557b1610ae9","Type":"ContainerStarted","Data":"e6fe4cb2233075063e00b7c4ef9d4a07ddd447a11db9ab933b943540c4551cce"} Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.104564 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bcd878cb5-xnt7l" event={"ID":"def20def-8ec8-4bb9-9c58-c557b1610ae9","Type":"ContainerStarted","Data":"c9553d39761390d7e6d6c534343e74d6beccf0c7aa51f85de36f63ef9b264867"} Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.104583 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bcd878cb5-xnt7l" event={"ID":"def20def-8ec8-4bb9-9c58-c557b1610ae9","Type":"ContainerStarted","Data":"1a5796087b4ff6bb9b151881f89aad6d860c5289538c2bf8abd2bd2935cca6c6"} Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.104659 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.115132 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"787ad3bd-2593-42a7-b368-70abddcd74da","Type":"ContainerDied","Data":"81a4cb47f07ac46e19d2227548b6b6ca251e9d2e38d55da3a6d08b7c54e6b110"} Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.115190 4705 scope.go:117] 
"RemoveContainer" containerID="c91e918230e493470a327c6187b94fa0aaf221efc2c16ad208a35fa23a7970ae" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.115302 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.161804 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bcd878cb5-xnt7l" podStartSLOduration=3.161772676 podStartE2EDuration="3.161772676s" podCreationTimestamp="2026-01-24 08:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:02:17.130348108 +0000 UTC m=+1275.850221416" watchObservedRunningTime="2026-01-24 08:02:17.161772676 +0000 UTC m=+1275.881645964" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.211986 4705 scope.go:117] "RemoveContainer" containerID="3102b2fef3f57292cb5b6b9258bc414e29b75b66237f8a94e7a97fd6f4617f74" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.250589 4705 scope.go:117] "RemoveContainer" containerID="29b1229f77bf2b10062994cd722d4cec0626c16f125b04fe450268fefe2563f9" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.323285 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.376749 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.403658 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:17 crc kubenswrapper[4705]: E0124 08:02:17.404531 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787ad3bd-2593-42a7-b368-70abddcd74da" containerName="sg-core" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.405726 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="787ad3bd-2593-42a7-b368-70abddcd74da" 
containerName="sg-core" Jan 24 08:02:17 crc kubenswrapper[4705]: E0124 08:02:17.405876 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787ad3bd-2593-42a7-b368-70abddcd74da" containerName="proxy-httpd" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.405956 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="787ad3bd-2593-42a7-b368-70abddcd74da" containerName="proxy-httpd" Jan 24 08:02:17 crc kubenswrapper[4705]: E0124 08:02:17.406041 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787ad3bd-2593-42a7-b368-70abddcd74da" containerName="ceilometer-notification-agent" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.406111 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="787ad3bd-2593-42a7-b368-70abddcd74da" containerName="ceilometer-notification-agent" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.406505 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="787ad3bd-2593-42a7-b368-70abddcd74da" containerName="proxy-httpd" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.406641 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="787ad3bd-2593-42a7-b368-70abddcd74da" containerName="sg-core" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.406750 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="787ad3bd-2593-42a7-b368-70abddcd74da" containerName="ceilometer-notification-agent" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.412955 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.413269 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.421527 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.421951 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.427408 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.485940 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.486010 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-config-data\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.486052 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e15332d4-f888-4bee-94b6-2ea12e682089-log-httpd\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.486082 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-scripts\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " 
pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.486106 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxl5l\" (UniqueName: \"kubernetes.io/projected/e15332d4-f888-4bee-94b6-2ea12e682089-kube-api-access-kxl5l\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.486126 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e15332d4-f888-4bee-94b6-2ea12e682089-run-httpd\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.486152 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.587260 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.587673 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-config-data\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.587921 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e15332d4-f888-4bee-94b6-2ea12e682089-log-httpd\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.588048 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-scripts\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.588155 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxl5l\" (UniqueName: \"kubernetes.io/projected/e15332d4-f888-4bee-94b6-2ea12e682089-kube-api-access-kxl5l\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.588302 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e15332d4-f888-4bee-94b6-2ea12e682089-run-httpd\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.588440 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.588614 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e15332d4-f888-4bee-94b6-2ea12e682089-log-httpd\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc 
kubenswrapper[4705]: I0124 08:02:17.589264 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e15332d4-f888-4bee-94b6-2ea12e682089-run-httpd\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.596026 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-scripts\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.597604 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-config-data\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.598390 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.601065 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f4bbbf7-117f-4759-86b7-d5df766762e3" path="/var/lib/kubelet/pods/1f4bbbf7-117f-4759-86b7-d5df766762e3/volumes" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.602284 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="787ad3bd-2593-42a7-b368-70abddcd74da" path="/var/lib/kubelet/pods/787ad3bd-2593-42a7-b368-70abddcd74da/volumes" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.607170 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.611894 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxl5l\" (UniqueName: \"kubernetes.io/projected/e15332d4-f888-4bee-94b6-2ea12e682089-kube-api-access-kxl5l\") pod \"ceilometer-0\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " pod="openstack/ceilometer-0" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.744719 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-9b94fb6bf-sxxmr" podUID="688d82f6-748b-42fa-8595-b24a65ba77d3" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.153:9696/\": dial tcp 10.217.0.153:9696: connect: connection refused" Jan 24 08:02:17 crc kubenswrapper[4705]: I0124 08:02:17.852132 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:02:18 crc kubenswrapper[4705]: I0124 08:02:18.150910 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf","Type":"ContainerStarted","Data":"8253df5483e93ef964a021c38b416d65a1b5302d1040cc27703b68da0b13c9b8"} Jan 24 08:02:18 crc kubenswrapper[4705]: I0124 08:02:18.151640 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf","Type":"ContainerStarted","Data":"2898f6277f8a5a3448bf8115d85603f1aa1ac65004710b6bcca90ca0e2fdd3b2"} Jan 24 08:02:18 crc kubenswrapper[4705]: I0124 08:02:18.350137 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:19 crc kubenswrapper[4705]: I0124 08:02:19.167277 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6b3b0e00-82d8-4096-80b1-a9edffb3cdaf","Type":"ContainerStarted","Data":"7836415d0df8a4c1504c5d1877e10954fecb00dec33acfcec625746fd7845d69"} Jan 24 08:02:19 crc kubenswrapper[4705]: I0124 08:02:19.167911 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 24 08:02:19 crc kubenswrapper[4705]: I0124 08:02:19.170648 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e15332d4-f888-4bee-94b6-2ea12e682089","Type":"ContainerStarted","Data":"7107a4f05d90ca40eea05d689e19558530577d0bfe309d1b11e511c6cbad041c"} Jan 24 08:02:19 crc kubenswrapper[4705]: I0124 08:02:19.170694 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e15332d4-f888-4bee-94b6-2ea12e682089","Type":"ContainerStarted","Data":"9afee2ec46039b5ea076e29bdb3eaccadff6c1cde76d854aeb36fd0898a1f16b"} Jan 24 08:02:19 crc kubenswrapper[4705]: I0124 08:02:19.219916 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:02:19 crc kubenswrapper[4705]: I0124 08:02:19.245317 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.245295271 podStartE2EDuration="3.245295271s" podCreationTimestamp="2026-01-24 08:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:02:19.20586399 +0000 UTC m=+1277.925737278" watchObservedRunningTime="2026-01-24 08:02:19.245295271 +0000 UTC m=+1277.965168579" Jan 24 08:02:19 crc kubenswrapper[4705]: I0124 08:02:19.250184 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55877bd6d-swpx2" Jan 24 08:02:20 crc kubenswrapper[4705]: I0124 08:02:20.195032 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e15332d4-f888-4bee-94b6-2ea12e682089","Type":"ContainerStarted","Data":"633da75b7c84b689580791d007b55b4b2215b1d3a4e409488d0ea65ab740f5c3"} Jan 24 08:02:21 crc kubenswrapper[4705]: I0124 08:02:21.206389 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e15332d4-f888-4bee-94b6-2ea12e682089","Type":"ContainerStarted","Data":"30819fe41c03ad3802061fd6d73a2f05ad94e2659ddeb188209f39b70767f2e8"} Jan 24 08:02:21 crc kubenswrapper[4705]: I0124 08:02:21.673025 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:21 crc kubenswrapper[4705]: I0124 08:02:21.735657 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-nkj6g"] Jan 24 08:02:21 crc kubenswrapper[4705]: I0124 08:02:21.735935 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" podUID="eb0f915f-127e-4dc3-8550-c75361485387" containerName="dnsmasq-dns" 
containerID="cri-o://7c3e84abfad6929b35d2ca75ae93c70e05615a30d6e9d7c234da8fcc4d342e6e" gracePeriod=10 Jan 24 08:02:21 crc kubenswrapper[4705]: I0124 08:02:21.908336 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 24 08:02:21 crc kubenswrapper[4705]: I0124 08:02:21.956490 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 08:02:22 crc kubenswrapper[4705]: I0124 08:02:22.231482 4705 generic.go:334] "Generic (PLEG): container finished" podID="eb0f915f-127e-4dc3-8550-c75361485387" containerID="7c3e84abfad6929b35d2ca75ae93c70e05615a30d6e9d7c234da8fcc4d342e6e" exitCode=0 Jan 24 08:02:22 crc kubenswrapper[4705]: I0124 08:02:22.231550 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" event={"ID":"eb0f915f-127e-4dc3-8550-c75361485387","Type":"ContainerDied","Data":"7c3e84abfad6929b35d2ca75ae93c70e05615a30d6e9d7c234da8fcc4d342e6e"} Jan 24 08:02:22 crc kubenswrapper[4705]: I0124 08:02:22.231737 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0dba7b53-b7e7-430c-bab3-5d5075b16fa3" containerName="cinder-scheduler" containerID="cri-o://86bac3e0503430e17af877bf52998bf106f4485fb2afd86d46ab5c9cd8422b0c" gracePeriod=30 Jan 24 08:02:22 crc kubenswrapper[4705]: I0124 08:02:22.232419 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0dba7b53-b7e7-430c-bab3-5d5075b16fa3" containerName="probe" containerID="cri-o://967d1f93ca97dcec847f9e488ccdb8e4e79a97f2592827c2c7e38e20a8072c8c" gracePeriod=30 Jan 24 08:02:22 crc kubenswrapper[4705]: I0124 08:02:22.646576 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:02:22 crc kubenswrapper[4705]: I0124 08:02:22.824369 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-ovsdbserver-sb\") pod \"eb0f915f-127e-4dc3-8550-c75361485387\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " Jan 24 08:02:22 crc kubenswrapper[4705]: I0124 08:02:22.825038 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-dns-swift-storage-0\") pod \"eb0f915f-127e-4dc3-8550-c75361485387\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " Jan 24 08:02:22 crc kubenswrapper[4705]: I0124 08:02:22.825310 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dcv9\" (UniqueName: \"kubernetes.io/projected/eb0f915f-127e-4dc3-8550-c75361485387-kube-api-access-2dcv9\") pod \"eb0f915f-127e-4dc3-8550-c75361485387\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " Jan 24 08:02:22 crc kubenswrapper[4705]: I0124 08:02:22.825366 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-dns-svc\") pod \"eb0f915f-127e-4dc3-8550-c75361485387\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " Jan 24 08:02:22 crc kubenswrapper[4705]: I0124 08:02:22.825439 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-config\") pod \"eb0f915f-127e-4dc3-8550-c75361485387\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " Jan 24 08:02:22 crc kubenswrapper[4705]: I0124 08:02:22.825479 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-ovsdbserver-nb\") pod \"eb0f915f-127e-4dc3-8550-c75361485387\" (UID: \"eb0f915f-127e-4dc3-8550-c75361485387\") " Jan 24 08:02:22 crc kubenswrapper[4705]: I0124 08:02:22.835176 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0f915f-127e-4dc3-8550-c75361485387-kube-api-access-2dcv9" (OuterVolumeSpecName: "kube-api-access-2dcv9") pod "eb0f915f-127e-4dc3-8550-c75361485387" (UID: "eb0f915f-127e-4dc3-8550-c75361485387"). InnerVolumeSpecName "kube-api-access-2dcv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.027652 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dcv9\" (UniqueName: \"kubernetes.io/projected/eb0f915f-127e-4dc3-8550-c75361485387-kube-api-access-2dcv9\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.072294 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb0f915f-127e-4dc3-8550-c75361485387" (UID: "eb0f915f-127e-4dc3-8550-c75361485387"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.100688 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-config" (OuterVolumeSpecName: "config") pod "eb0f915f-127e-4dc3-8550-c75361485387" (UID: "eb0f915f-127e-4dc3-8550-c75361485387"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.109530 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb0f915f-127e-4dc3-8550-c75361485387" (UID: "eb0f915f-127e-4dc3-8550-c75361485387"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.131471 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eb0f915f-127e-4dc3-8550-c75361485387" (UID: "eb0f915f-127e-4dc3-8550-c75361485387"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.138028 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.138069 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.138083 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.138108 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-dns-swift-storage-0\") on node 
\"crc\" DevicePath \"\"" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.140489 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb0f915f-127e-4dc3-8550-c75361485387" (UID: "eb0f915f-127e-4dc3-8550-c75361485387"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.227605 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5c7b7bd5d5-rdcz5" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.241271 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0f915f-127e-4dc3-8550-c75361485387-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.286764 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" event={"ID":"eb0f915f-127e-4dc3-8550-c75361485387","Type":"ContainerDied","Data":"704a14648cbdc27939c72ccf1ef222c0177f56b82911053838115f05d9b0658b"} Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.287164 4705 scope.go:117] "RemoveContainer" containerID="7c3e84abfad6929b35d2ca75ae93c70e05615a30d6e9d7c234da8fcc4d342e6e" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.287360 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-nkj6g" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.298387 4705 generic.go:334] "Generic (PLEG): container finished" podID="688d82f6-748b-42fa-8595-b24a65ba77d3" containerID="0ba3fb74191edff069c8e1acc25b146de11147efac97ca219f87b27119c5b1f8" exitCode=0 Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.298441 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b94fb6bf-sxxmr" event={"ID":"688d82f6-748b-42fa-8595-b24a65ba77d3","Type":"ContainerDied","Data":"0ba3fb74191edff069c8e1acc25b146de11147efac97ca219f87b27119c5b1f8"} Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.419093 4705 scope.go:117] "RemoveContainer" containerID="5c534a84563afb2f34607f22b91fbd671924bb9f51967452c5d733fae08a3c68" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.463962 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-nkj6g"] Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.474481 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-nkj6g"] Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.602349 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb0f915f-127e-4dc3-8550-c75361485387" path="/var/lib/kubelet/pods/eb0f915f-127e-4dc3-8550-c75361485387/volumes" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.752172 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.859211 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-ovndb-tls-certs\") pod \"688d82f6-748b-42fa-8595-b24a65ba77d3\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.859684 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9nkg\" (UniqueName: \"kubernetes.io/projected/688d82f6-748b-42fa-8595-b24a65ba77d3-kube-api-access-p9nkg\") pod \"688d82f6-748b-42fa-8595-b24a65ba77d3\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.859896 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-internal-tls-certs\") pod \"688d82f6-748b-42fa-8595-b24a65ba77d3\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.860069 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-httpd-config\") pod \"688d82f6-748b-42fa-8595-b24a65ba77d3\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.860320 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-config\") pod \"688d82f6-748b-42fa-8595-b24a65ba77d3\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.860434 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-combined-ca-bundle\") pod \"688d82f6-748b-42fa-8595-b24a65ba77d3\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.860571 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-public-tls-certs\") pod \"688d82f6-748b-42fa-8595-b24a65ba77d3\" (UID: \"688d82f6-748b-42fa-8595-b24a65ba77d3\") " Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.868196 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688d82f6-748b-42fa-8595-b24a65ba77d3-kube-api-access-p9nkg" (OuterVolumeSpecName: "kube-api-access-p9nkg") pod "688d82f6-748b-42fa-8595-b24a65ba77d3" (UID: "688d82f6-748b-42fa-8595-b24a65ba77d3"). InnerVolumeSpecName "kube-api-access-p9nkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.868941 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "688d82f6-748b-42fa-8595-b24a65ba77d3" (UID: "688d82f6-748b-42fa-8595-b24a65ba77d3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.930166 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "688d82f6-748b-42fa-8595-b24a65ba77d3" (UID: "688d82f6-748b-42fa-8595-b24a65ba77d3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.936803 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-config" (OuterVolumeSpecName: "config") pod "688d82f6-748b-42fa-8595-b24a65ba77d3" (UID: "688d82f6-748b-42fa-8595-b24a65ba77d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.936960 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "688d82f6-748b-42fa-8595-b24a65ba77d3" (UID: "688d82f6-748b-42fa-8595-b24a65ba77d3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.945009 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "688d82f6-748b-42fa-8595-b24a65ba77d3" (UID: "688d82f6-748b-42fa-8595-b24a65ba77d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.965957 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "688d82f6-748b-42fa-8595-b24a65ba77d3" (UID: "688d82f6-748b-42fa-8595-b24a65ba77d3"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.968068 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.968105 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.968116 4705 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.968126 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9nkg\" (UniqueName: \"kubernetes.io/projected/688d82f6-748b-42fa-8595-b24a65ba77d3-kube-api-access-p9nkg\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.968139 4705 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:23 crc kubenswrapper[4705]: I0124 08:02:23.968151 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:24 crc kubenswrapper[4705]: I0124 08:02:24.070049 4705 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/688d82f6-748b-42fa-8595-b24a65ba77d3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:24 crc kubenswrapper[4705]: I0124 08:02:24.375685 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e15332d4-f888-4bee-94b6-2ea12e682089","Type":"ContainerStarted","Data":"e5dada2ea0ad39e1dc0188db0a969655e7b51bb6fc32531e425e5a82a20515fd"} Jan 24 08:02:24 crc kubenswrapper[4705]: I0124 08:02:24.375770 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 08:02:24 crc kubenswrapper[4705]: I0124 08:02:24.378430 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b94fb6bf-sxxmr" event={"ID":"688d82f6-748b-42fa-8595-b24a65ba77d3","Type":"ContainerDied","Data":"4ea4bbe7fa8daa74c24823206ff8d6c30fc26b0d5440a29b1d1d67364e2b2046"} Jan 24 08:02:24 crc kubenswrapper[4705]: I0124 08:02:24.378475 4705 scope.go:117] "RemoveContainer" containerID="a3098359904e16f8784ce49d0d993a5c7f30ea1518d1ed57f78d203e2b46e56f" Jan 24 08:02:24 crc kubenswrapper[4705]: I0124 08:02:24.378575 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9b94fb6bf-sxxmr" Jan 24 08:02:24 crc kubenswrapper[4705]: I0124 08:02:24.390449 4705 generic.go:334] "Generic (PLEG): container finished" podID="0dba7b53-b7e7-430c-bab3-5d5075b16fa3" containerID="967d1f93ca97dcec847f9e488ccdb8e4e79a97f2592827c2c7e38e20a8072c8c" exitCode=0 Jan 24 08:02:24 crc kubenswrapper[4705]: I0124 08:02:24.390592 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0dba7b53-b7e7-430c-bab3-5d5075b16fa3","Type":"ContainerDied","Data":"967d1f93ca97dcec847f9e488ccdb8e4e79a97f2592827c2c7e38e20a8072c8c"} Jan 24 08:02:24 crc kubenswrapper[4705]: I0124 08:02:24.419335 4705 scope.go:117] "RemoveContainer" containerID="0ba3fb74191edff069c8e1acc25b146de11147efac97ca219f87b27119c5b1f8" Jan 24 08:02:24 crc kubenswrapper[4705]: I0124 08:02:24.445236 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.738920801 
podStartE2EDuration="7.445217785s" podCreationTimestamp="2026-01-24 08:02:17 +0000 UTC" firstStartedPulling="2026-01-24 08:02:18.359598649 +0000 UTC m=+1277.079471937" lastFinishedPulling="2026-01-24 08:02:23.065895633 +0000 UTC m=+1281.785768921" observedRunningTime="2026-01-24 08:02:24.418291542 +0000 UTC m=+1283.138164830" watchObservedRunningTime="2026-01-24 08:02:24.445217785 +0000 UTC m=+1283.165091063" Jan 24 08:02:24 crc kubenswrapper[4705]: I0124 08:02:24.446875 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9b94fb6bf-sxxmr"] Jan 24 08:02:24 crc kubenswrapper[4705]: I0124 08:02:24.466224 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9b94fb6bf-sxxmr"] Jan 24 08:02:25 crc kubenswrapper[4705]: I0124 08:02:25.587430 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="688d82f6-748b-42fa-8595-b24a65ba77d3" path="/var/lib/kubelet/pods/688d82f6-748b-42fa-8595-b24a65ba77d3/volumes" Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.432733 4705 generic.go:334] "Generic (PLEG): container finished" podID="0dba7b53-b7e7-430c-bab3-5d5075b16fa3" containerID="86bac3e0503430e17af877bf52998bf106f4485fb2afd86d46ab5c9cd8422b0c" exitCode=0 Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.432807 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0dba7b53-b7e7-430c-bab3-5d5075b16fa3","Type":"ContainerDied","Data":"86bac3e0503430e17af877bf52998bf106f4485fb2afd86d46ab5c9cd8422b0c"} Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.433098 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0dba7b53-b7e7-430c-bab3-5d5075b16fa3","Type":"ContainerDied","Data":"09dc86c9aae395c6dc57e2b8e959382c214f67daef53ec7442b723bbfc9cef49"} Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.433123 4705 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="09dc86c9aae395c6dc57e2b8e959382c214f67daef53ec7442b723bbfc9cef49" Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.514388 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.565691 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-config-data\") pod \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.565775 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-combined-ca-bundle\") pod \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.565845 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-scripts\") pod \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.565894 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-config-data-custom\") pod \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.565961 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxz49\" (UniqueName: \"kubernetes.io/projected/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-kube-api-access-mxz49\") pod \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\" (UID: 
\"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.565999 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-etc-machine-id\") pod \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\" (UID: \"0dba7b53-b7e7-430c-bab3-5d5075b16fa3\") " Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.566618 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0dba7b53-b7e7-430c-bab3-5d5075b16fa3" (UID: "0dba7b53-b7e7-430c-bab3-5d5075b16fa3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.573091 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-scripts" (OuterVolumeSpecName: "scripts") pod "0dba7b53-b7e7-430c-bab3-5d5075b16fa3" (UID: "0dba7b53-b7e7-430c-bab3-5d5075b16fa3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.578366 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0dba7b53-b7e7-430c-bab3-5d5075b16fa3" (UID: "0dba7b53-b7e7-430c-bab3-5d5075b16fa3"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.581312 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-kube-api-access-mxz49" (OuterVolumeSpecName: "kube-api-access-mxz49") pod "0dba7b53-b7e7-430c-bab3-5d5075b16fa3" (UID: "0dba7b53-b7e7-430c-bab3-5d5075b16fa3"). InnerVolumeSpecName "kube-api-access-mxz49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.643091 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0dba7b53-b7e7-430c-bab3-5d5075b16fa3" (UID: "0dba7b53-b7e7-430c-bab3-5d5075b16fa3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.669537 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.671320 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.671357 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.671371 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxz49\" (UniqueName: 
\"kubernetes.io/projected/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-kube-api-access-mxz49\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.671507 4705 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.751014 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-config-data" (OuterVolumeSpecName: "config-data") pod "0dba7b53-b7e7-430c-bab3-5d5075b16fa3" (UID: "0dba7b53-b7e7-430c-bab3-5d5075b16fa3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:26 crc kubenswrapper[4705]: I0124 08:02:26.773190 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dba7b53-b7e7-430c-bab3-5d5075b16fa3-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.440638 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.479353 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.495777 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.505729 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 08:02:27 crc kubenswrapper[4705]: E0124 08:02:27.506228 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0f915f-127e-4dc3-8550-c75361485387" containerName="dnsmasq-dns" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.506247 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0f915f-127e-4dc3-8550-c75361485387" containerName="dnsmasq-dns" Jan 24 08:02:27 crc kubenswrapper[4705]: E0124 08:02:27.506265 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dba7b53-b7e7-430c-bab3-5d5075b16fa3" containerName="probe" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.506272 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dba7b53-b7e7-430c-bab3-5d5075b16fa3" containerName="probe" Jan 24 08:02:27 crc kubenswrapper[4705]: E0124 08:02:27.506303 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dba7b53-b7e7-430c-bab3-5d5075b16fa3" containerName="cinder-scheduler" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.506309 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dba7b53-b7e7-430c-bab3-5d5075b16fa3" containerName="cinder-scheduler" Jan 24 08:02:27 crc kubenswrapper[4705]: E0124 08:02:27.506321 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0f915f-127e-4dc3-8550-c75361485387" containerName="init" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.506327 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eb0f915f-127e-4dc3-8550-c75361485387" containerName="init" Jan 24 08:02:27 crc kubenswrapper[4705]: E0124 08:02:27.506335 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688d82f6-748b-42fa-8595-b24a65ba77d3" containerName="neutron-api" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.506341 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="688d82f6-748b-42fa-8595-b24a65ba77d3" containerName="neutron-api" Jan 24 08:02:27 crc kubenswrapper[4705]: E0124 08:02:27.506350 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688d82f6-748b-42fa-8595-b24a65ba77d3" containerName="neutron-httpd" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.506356 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="688d82f6-748b-42fa-8595-b24a65ba77d3" containerName="neutron-httpd" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.506531 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dba7b53-b7e7-430c-bab3-5d5075b16fa3" containerName="probe" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.506545 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0f915f-127e-4dc3-8550-c75361485387" containerName="dnsmasq-dns" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.506556 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dba7b53-b7e7-430c-bab3-5d5075b16fa3" containerName="cinder-scheduler" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.506568 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="688d82f6-748b-42fa-8595-b24a65ba77d3" containerName="neutron-httpd" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.506585 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="688d82f6-748b-42fa-8595-b24a65ba77d3" containerName="neutron-api" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.507785 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.510945 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.514652 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.587054 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dba7b53-b7e7-430c-bab3-5d5075b16fa3" path="/var/lib/kubelet/pods/0dba7b53-b7e7-430c-bab3-5d5075b16fa3/volumes" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.588560 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76eadf8b-3ddc-461f-b8d6-87978146e077-config-data\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.588614 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76eadf8b-3ddc-461f-b8d6-87978146e077-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.588678 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2vmk\" (UniqueName: \"kubernetes.io/projected/76eadf8b-3ddc-461f-b8d6-87978146e077-kube-api-access-s2vmk\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.588703 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/76eadf8b-3ddc-461f-b8d6-87978146e077-scripts\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.588788 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76eadf8b-3ddc-461f-b8d6-87978146e077-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.588841 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76eadf8b-3ddc-461f-b8d6-87978146e077-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.674857 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.676016 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.677967 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.679056 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.679097 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-jz96z" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.686868 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.691072 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76eadf8b-3ddc-461f-b8d6-87978146e077-config-data\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.691123 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76eadf8b-3ddc-461f-b8d6-87978146e077-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.691189 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2vmk\" (UniqueName: \"kubernetes.io/projected/76eadf8b-3ddc-461f-b8d6-87978146e077-kube-api-access-s2vmk\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.691215 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/76eadf8b-3ddc-461f-b8d6-87978146e077-scripts\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.691296 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76eadf8b-3ddc-461f-b8d6-87978146e077-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.691328 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76eadf8b-3ddc-461f-b8d6-87978146e077-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.691792 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76eadf8b-3ddc-461f-b8d6-87978146e077-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.700299 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76eadf8b-3ddc-461f-b8d6-87978146e077-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.700564 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76eadf8b-3ddc-461f-b8d6-87978146e077-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.702671 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76eadf8b-3ddc-461f-b8d6-87978146e077-scripts\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.713279 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76eadf8b-3ddc-461f-b8d6-87978146e077-config-data\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.715418 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2vmk\" (UniqueName: \"kubernetes.io/projected/76eadf8b-3ddc-461f-b8d6-87978146e077-kube-api-access-s2vmk\") pod \"cinder-scheduler-0\" (UID: \"76eadf8b-3ddc-461f-b8d6-87978146e077\") " pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.794380 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb2f0d71-c524-4d09-9924-5507ca1b4463-openstack-config-secret\") pod \"openstackclient\" (UID: \"fb2f0d71-c524-4d09-9924-5507ca1b4463\") " pod="openstack/openstackclient" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.794537 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb2f0d71-c524-4d09-9924-5507ca1b4463-openstack-config\") pod \"openstackclient\" (UID: \"fb2f0d71-c524-4d09-9924-5507ca1b4463\") " pod="openstack/openstackclient" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.794663 
4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb2f0d71-c524-4d09-9924-5507ca1b4463-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fb2f0d71-c524-4d09-9924-5507ca1b4463\") " pod="openstack/openstackclient" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.794717 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4mnv\" (UniqueName: \"kubernetes.io/projected/fb2f0d71-c524-4d09-9924-5507ca1b4463-kube-api-access-g4mnv\") pod \"openstackclient\" (UID: \"fb2f0d71-c524-4d09-9924-5507ca1b4463\") " pod="openstack/openstackclient" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.824133 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.896566 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb2f0d71-c524-4d09-9924-5507ca1b4463-openstack-config-secret\") pod \"openstackclient\" (UID: \"fb2f0d71-c524-4d09-9924-5507ca1b4463\") " pod="openstack/openstackclient" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.896648 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb2f0d71-c524-4d09-9924-5507ca1b4463-openstack-config\") pod \"openstackclient\" (UID: \"fb2f0d71-c524-4d09-9924-5507ca1b4463\") " pod="openstack/openstackclient" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.896736 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb2f0d71-c524-4d09-9924-5507ca1b4463-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fb2f0d71-c524-4d09-9924-5507ca1b4463\") " 
pod="openstack/openstackclient" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.896767 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4mnv\" (UniqueName: \"kubernetes.io/projected/fb2f0d71-c524-4d09-9924-5507ca1b4463-kube-api-access-g4mnv\") pod \"openstackclient\" (UID: \"fb2f0d71-c524-4d09-9924-5507ca1b4463\") " pod="openstack/openstackclient" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.898140 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb2f0d71-c524-4d09-9924-5507ca1b4463-openstack-config\") pod \"openstackclient\" (UID: \"fb2f0d71-c524-4d09-9924-5507ca1b4463\") " pod="openstack/openstackclient" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.904373 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb2f0d71-c524-4d09-9924-5507ca1b4463-openstack-config-secret\") pod \"openstackclient\" (UID: \"fb2f0d71-c524-4d09-9924-5507ca1b4463\") " pod="openstack/openstackclient" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.909302 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb2f0d71-c524-4d09-9924-5507ca1b4463-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fb2f0d71-c524-4d09-9924-5507ca1b4463\") " pod="openstack/openstackclient" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.922560 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-58599c4547-sbsm4"] Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.924464 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.927566 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.927692 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.928268 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.934301 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4mnv\" (UniqueName: \"kubernetes.io/projected/fb2f0d71-c524-4d09-9924-5507ca1b4463-kube-api-access-g4mnv\") pod \"openstackclient\" (UID: \"fb2f0d71-c524-4d09-9924-5507ca1b4463\") " pod="openstack/openstackclient" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.967650 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58599c4547-sbsm4"] Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.998111 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1cce5e47-bb96-4468-8818-29869d013b7b-log-httpd\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.998165 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cce5e47-bb96-4468-8818-29869d013b7b-internal-tls-certs\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.998205 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1cce5e47-bb96-4468-8818-29869d013b7b-etc-swift\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.998291 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cce5e47-bb96-4468-8818-29869d013b7b-public-tls-certs\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.998349 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7bxn\" (UniqueName: \"kubernetes.io/projected/1cce5e47-bb96-4468-8818-29869d013b7b-kube-api-access-c7bxn\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.998501 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1cce5e47-bb96-4468-8818-29869d013b7b-run-httpd\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:27 crc kubenswrapper[4705]: I0124 08:02:27.998549 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cce5e47-bb96-4468-8818-29869d013b7b-config-data\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:27 crc kubenswrapper[4705]: 
I0124 08:02:27.998578 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cce5e47-bb96-4468-8818-29869d013b7b-combined-ca-bundle\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.093238 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.095955 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.106779 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.107348 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7bxn\" (UniqueName: \"kubernetes.io/projected/1cce5e47-bb96-4468-8818-29869d013b7b-kube-api-access-c7bxn\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.107763 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1cce5e47-bb96-4468-8818-29869d013b7b-run-httpd\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.107861 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cce5e47-bb96-4468-8818-29869d013b7b-config-data\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " 
pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.107937 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cce5e47-bb96-4468-8818-29869d013b7b-combined-ca-bundle\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.109771 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1cce5e47-bb96-4468-8818-29869d013b7b-log-httpd\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.109833 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cce5e47-bb96-4468-8818-29869d013b7b-internal-tls-certs\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.109871 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1cce5e47-bb96-4468-8818-29869d013b7b-etc-swift\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.109952 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cce5e47-bb96-4468-8818-29869d013b7b-public-tls-certs\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 
crc kubenswrapper[4705]: I0124 08:02:28.108415 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1cce5e47-bb96-4468-8818-29869d013b7b-run-httpd\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.114750 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cce5e47-bb96-4468-8818-29869d013b7b-combined-ca-bundle\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.115713 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cce5e47-bb96-4468-8818-29869d013b7b-public-tls-certs\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.121649 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cce5e47-bb96-4468-8818-29869d013b7b-internal-tls-certs\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.121951 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1cce5e47-bb96-4468-8818-29869d013b7b-log-httpd\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.141064 4705 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cce5e47-bb96-4468-8818-29869d013b7b-config-data\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.143941 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1cce5e47-bb96-4468-8818-29869d013b7b-etc-swift\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.144074 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7bxn\" (UniqueName: \"kubernetes.io/projected/1cce5e47-bb96-4468-8818-29869d013b7b-kube-api-access-c7bxn\") pod \"swift-proxy-58599c4547-sbsm4\" (UID: \"1cce5e47-bb96-4468-8818-29869d013b7b\") " pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.184360 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.188861 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.198553 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.310139 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.314120 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d722r\" (UniqueName: \"kubernetes.io/projected/b84bd122-47ef-448f-914d-c65c554fa7c1-kube-api-access-d722r\") pod \"openstackclient\" (UID: \"b84bd122-47ef-448f-914d-c65c554fa7c1\") " pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.314257 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b84bd122-47ef-448f-914d-c65c554fa7c1-openstack-config-secret\") pod \"openstackclient\" (UID: \"b84bd122-47ef-448f-914d-c65c554fa7c1\") " pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.314297 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b84bd122-47ef-448f-914d-c65c554fa7c1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b84bd122-47ef-448f-914d-c65c554fa7c1\") " pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.314391 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b84bd122-47ef-448f-914d-c65c554fa7c1-openstack-config\") pod \"openstackclient\" (UID: \"b84bd122-47ef-448f-914d-c65c554fa7c1\") " pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: E0124 08:02:28.394697 4705 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 24 08:02:28 crc kubenswrapper[4705]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_openstackclient_openstack_fb2f0d71-c524-4d09-9924-5507ca1b4463_0(846b5ea79d6028a966bbc59c4370d1ed2f9dd24d83fed5d5751e7546127ff393): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"846b5ea79d6028a966bbc59c4370d1ed2f9dd24d83fed5d5751e7546127ff393" Netns:"/var/run/netns/44f359dc-2efa-4dda-b814-c0a7425c0b7a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=846b5ea79d6028a966bbc59c4370d1ed2f9dd24d83fed5d5751e7546127ff393;K8S_POD_UID=fb2f0d71-c524-4d09-9924-5507ca1b4463" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/fb2f0d71-c524-4d09-9924-5507ca1b4463]: expected pod UID "fb2f0d71-c524-4d09-9924-5507ca1b4463" but got "b84bd122-47ef-448f-914d-c65c554fa7c1" from Kube API Jan 24 08:02:28 crc kubenswrapper[4705]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 24 08:02:28 crc kubenswrapper[4705]: > Jan 24 08:02:28 crc kubenswrapper[4705]: E0124 08:02:28.394856 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 24 08:02:28 crc kubenswrapper[4705]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_fb2f0d71-c524-4d09-9924-5507ca1b4463_0(846b5ea79d6028a966bbc59c4370d1ed2f9dd24d83fed5d5751e7546127ff393): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request 
failed with status 400: 'ContainerID:"846b5ea79d6028a966bbc59c4370d1ed2f9dd24d83fed5d5751e7546127ff393" Netns:"/var/run/netns/44f359dc-2efa-4dda-b814-c0a7425c0b7a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=846b5ea79d6028a966bbc59c4370d1ed2f9dd24d83fed5d5751e7546127ff393;K8S_POD_UID=fb2f0d71-c524-4d09-9924-5507ca1b4463" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/fb2f0d71-c524-4d09-9924-5507ca1b4463]: expected pod UID "fb2f0d71-c524-4d09-9924-5507ca1b4463" but got "b84bd122-47ef-448f-914d-c65c554fa7c1" from Kube API Jan 24 08:02:28 crc kubenswrapper[4705]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 24 08:02:28 crc kubenswrapper[4705]: > pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.416196 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b84bd122-47ef-448f-914d-c65c554fa7c1-openstack-config\") pod \"openstackclient\" (UID: \"b84bd122-47ef-448f-914d-c65c554fa7c1\") " pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.416328 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d722r\" (UniqueName: \"kubernetes.io/projected/b84bd122-47ef-448f-914d-c65c554fa7c1-kube-api-access-d722r\") pod \"openstackclient\" (UID: \"b84bd122-47ef-448f-914d-c65c554fa7c1\") " pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.416399 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b84bd122-47ef-448f-914d-c65c554fa7c1-openstack-config-secret\") pod \"openstackclient\" (UID: \"b84bd122-47ef-448f-914d-c65c554fa7c1\") " pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.416434 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b84bd122-47ef-448f-914d-c65c554fa7c1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b84bd122-47ef-448f-914d-c65c554fa7c1\") " pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.417516 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b84bd122-47ef-448f-914d-c65c554fa7c1-openstack-config\") pod \"openstackclient\" (UID: \"b84bd122-47ef-448f-914d-c65c554fa7c1\") " pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.423663 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b84bd122-47ef-448f-914d-c65c554fa7c1-openstack-config-secret\") pod \"openstackclient\" (UID: \"b84bd122-47ef-448f-914d-c65c554fa7c1\") " pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.425473 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b84bd122-47ef-448f-914d-c65c554fa7c1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b84bd122-47ef-448f-914d-c65c554fa7c1\") " pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.435611 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d722r\" (UniqueName: 
\"kubernetes.io/projected/b84bd122-47ef-448f-914d-c65c554fa7c1-kube-api-access-d722r\") pod \"openstackclient\" (UID: \"b84bd122-47ef-448f-914d-c65c554fa7c1\") " pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.453073 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.457002 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="fb2f0d71-c524-4d09-9924-5507ca1b4463" podUID="b84bd122-47ef-448f-914d-c65c554fa7c1" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.468570 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.516146 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.532240 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.621479 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb2f0d71-c524-4d09-9924-5507ca1b4463-openstack-config-secret\") pod \"fb2f0d71-c524-4d09-9924-5507ca1b4463\" (UID: \"fb2f0d71-c524-4d09-9924-5507ca1b4463\") " Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.621589 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb2f0d71-c524-4d09-9924-5507ca1b4463-combined-ca-bundle\") pod \"fb2f0d71-c524-4d09-9924-5507ca1b4463\" (UID: \"fb2f0d71-c524-4d09-9924-5507ca1b4463\") " Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.621773 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4mnv\" (UniqueName: \"kubernetes.io/projected/fb2f0d71-c524-4d09-9924-5507ca1b4463-kube-api-access-g4mnv\") pod \"fb2f0d71-c524-4d09-9924-5507ca1b4463\" (UID: \"fb2f0d71-c524-4d09-9924-5507ca1b4463\") " Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.621862 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb2f0d71-c524-4d09-9924-5507ca1b4463-openstack-config\") pod \"fb2f0d71-c524-4d09-9924-5507ca1b4463\" (UID: \"fb2f0d71-c524-4d09-9924-5507ca1b4463\") " Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.623926 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb2f0d71-c524-4d09-9924-5507ca1b4463-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fb2f0d71-c524-4d09-9924-5507ca1b4463" (UID: "fb2f0d71-c524-4d09-9924-5507ca1b4463"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.627610 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb2f0d71-c524-4d09-9924-5507ca1b4463-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fb2f0d71-c524-4d09-9924-5507ca1b4463" (UID: "fb2f0d71-c524-4d09-9924-5507ca1b4463"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.629314 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb2f0d71-c524-4d09-9924-5507ca1b4463-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb2f0d71-c524-4d09-9924-5507ca1b4463" (UID: "fb2f0d71-c524-4d09-9924-5507ca1b4463"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.630072 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb2f0d71-c524-4d09-9924-5507ca1b4463-kube-api-access-g4mnv" (OuterVolumeSpecName: "kube-api-access-g4mnv") pod "fb2f0d71-c524-4d09-9924-5507ca1b4463" (UID: "fb2f0d71-c524-4d09-9924-5507ca1b4463"). InnerVolumeSpecName "kube-api-access-g4mnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.724633 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4mnv\" (UniqueName: \"kubernetes.io/projected/fb2f0d71-c524-4d09-9924-5507ca1b4463-kube-api-access-g4mnv\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.724665 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb2f0d71-c524-4d09-9924-5507ca1b4463-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.724674 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb2f0d71-c524-4d09-9924-5507ca1b4463-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.724684 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb2f0d71-c524-4d09-9924-5507ca1b4463-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.768459 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.768712 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" containerName="ceilometer-central-agent" containerID="cri-o://7107a4f05d90ca40eea05d689e19558530577d0bfe309d1b11e511c6cbad041c" gracePeriod=30 Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.768797 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" containerName="proxy-httpd" containerID="cri-o://e5dada2ea0ad39e1dc0188db0a969655e7b51bb6fc32531e425e5a82a20515fd" 
gracePeriod=30 Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.768865 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" containerName="sg-core" containerID="cri-o://30819fe41c03ad3802061fd6d73a2f05ad94e2659ddeb188209f39b70767f2e8" gracePeriod=30 Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.768930 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" containerName="ceilometer-notification-agent" containerID="cri-o://633da75b7c84b689580791d007b55b4b2215b1d3a4e409488d0ea65ab740f5c3" gracePeriod=30 Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.861440 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 24 08:02:28 crc kubenswrapper[4705]: I0124 08:02:28.995496 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58599c4547-sbsm4"] Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.476743 4705 generic.go:334] "Generic (PLEG): container finished" podID="e15332d4-f888-4bee-94b6-2ea12e682089" containerID="e5dada2ea0ad39e1dc0188db0a969655e7b51bb6fc32531e425e5a82a20515fd" exitCode=0 Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.477523 4705 generic.go:334] "Generic (PLEG): container finished" podID="e15332d4-f888-4bee-94b6-2ea12e682089" containerID="30819fe41c03ad3802061fd6d73a2f05ad94e2659ddeb188209f39b70767f2e8" exitCode=2 Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.477688 4705 generic.go:334] "Generic (PLEG): container finished" podID="e15332d4-f888-4bee-94b6-2ea12e682089" containerID="633da75b7c84b689580791d007b55b4b2215b1d3a4e409488d0ea65ab740f5c3" exitCode=0 Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.477829 4705 generic.go:334] "Generic (PLEG): container finished" podID="e15332d4-f888-4bee-94b6-2ea12e682089" 
containerID="7107a4f05d90ca40eea05d689e19558530577d0bfe309d1b11e511c6cbad041c" exitCode=0 Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.477344 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e15332d4-f888-4bee-94b6-2ea12e682089","Type":"ContainerDied","Data":"e5dada2ea0ad39e1dc0188db0a969655e7b51bb6fc32531e425e5a82a20515fd"} Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.478065 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e15332d4-f888-4bee-94b6-2ea12e682089","Type":"ContainerDied","Data":"30819fe41c03ad3802061fd6d73a2f05ad94e2659ddeb188209f39b70767f2e8"} Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.478092 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e15332d4-f888-4bee-94b6-2ea12e682089","Type":"ContainerDied","Data":"633da75b7c84b689580791d007b55b4b2215b1d3a4e409488d0ea65ab740f5c3"} Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.478105 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e15332d4-f888-4bee-94b6-2ea12e682089","Type":"ContainerDied","Data":"7107a4f05d90ca40eea05d689e19558530577d0bfe309d1b11e511c6cbad041c"} Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.480080 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b84bd122-47ef-448f-914d-c65c554fa7c1","Type":"ContainerStarted","Data":"acabc3872bba658e2ae3aa6ba188837d682fd628b02b7e8cd73b44fa716f583e"} Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.481954 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76eadf8b-3ddc-461f-b8d6-87978146e077","Type":"ContainerStarted","Data":"cd3400c93c33b1cd723a8bf76973a989c87fa5b0f249a6e9c8a410f901f8006d"} Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.484378 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.488177 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58599c4547-sbsm4" event={"ID":"1cce5e47-bb96-4468-8818-29869d013b7b","Type":"ContainerStarted","Data":"caad78cd862137a587db9975cbfd8fad4deeb3819bcfcb2ab3762f8274af047e"} Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.493626 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="fb2f0d71-c524-4d09-9924-5507ca1b4463" podUID="b84bd122-47ef-448f-914d-c65c554fa7c1" Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.616985 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb2f0d71-c524-4d09-9924-5507ca1b4463" path="/var/lib/kubelet/pods/fb2f0d71-c524-4d09-9924-5507ca1b4463/volumes" Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.704501 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.753443 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-scripts\") pod \"e15332d4-f888-4bee-94b6-2ea12e682089\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.753931 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxl5l\" (UniqueName: \"kubernetes.io/projected/e15332d4-f888-4bee-94b6-2ea12e682089-kube-api-access-kxl5l\") pod \"e15332d4-f888-4bee-94b6-2ea12e682089\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.754077 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-sg-core-conf-yaml\") pod \"e15332d4-f888-4bee-94b6-2ea12e682089\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.754132 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e15332d4-f888-4bee-94b6-2ea12e682089-log-httpd\") pod \"e15332d4-f888-4bee-94b6-2ea12e682089\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.754160 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-config-data\") pod \"e15332d4-f888-4bee-94b6-2ea12e682089\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.754224 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e15332d4-f888-4bee-94b6-2ea12e682089-run-httpd\") pod \"e15332d4-f888-4bee-94b6-2ea12e682089\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.754284 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-combined-ca-bundle\") pod \"e15332d4-f888-4bee-94b6-2ea12e682089\" (UID: \"e15332d4-f888-4bee-94b6-2ea12e682089\") " Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.758716 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e15332d4-f888-4bee-94b6-2ea12e682089-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e15332d4-f888-4bee-94b6-2ea12e682089" (UID: "e15332d4-f888-4bee-94b6-2ea12e682089"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.759444 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-scripts" (OuterVolumeSpecName: "scripts") pod "e15332d4-f888-4bee-94b6-2ea12e682089" (UID: "e15332d4-f888-4bee-94b6-2ea12e682089"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.762016 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e15332d4-f888-4bee-94b6-2ea12e682089-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e15332d4-f888-4bee-94b6-2ea12e682089" (UID: "e15332d4-f888-4bee-94b6-2ea12e682089"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.816668 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e15332d4-f888-4bee-94b6-2ea12e682089-kube-api-access-kxl5l" (OuterVolumeSpecName: "kube-api-access-kxl5l") pod "e15332d4-f888-4bee-94b6-2ea12e682089" (UID: "e15332d4-f888-4bee-94b6-2ea12e682089"). InnerVolumeSpecName "kube-api-access-kxl5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.857028 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxl5l\" (UniqueName: \"kubernetes.io/projected/e15332d4-f888-4bee-94b6-2ea12e682089-kube-api-access-kxl5l\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.857064 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e15332d4-f888-4bee-94b6-2ea12e682089-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.857074 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e15332d4-f888-4bee-94b6-2ea12e682089-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.857082 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.896768 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e15332d4-f888-4bee-94b6-2ea12e682089" (UID: "e15332d4-f888-4bee-94b6-2ea12e682089"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.904655 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e15332d4-f888-4bee-94b6-2ea12e682089" (UID: "e15332d4-f888-4bee-94b6-2ea12e682089"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.958479 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:29 crc kubenswrapper[4705]: I0124 08:02:29.958518 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.100742 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-config-data" (OuterVolumeSpecName: "config-data") pod "e15332d4-f888-4bee-94b6-2ea12e682089" (UID: "e15332d4-f888-4bee-94b6-2ea12e682089"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.166100 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e15332d4-f888-4bee-94b6-2ea12e682089-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.189176 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.503757 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.504078 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e15332d4-f888-4bee-94b6-2ea12e682089","Type":"ContainerDied","Data":"9afee2ec46039b5ea076e29bdb3eaccadff6c1cde76d854aeb36fd0898a1f16b"} Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.504141 4705 scope.go:117] "RemoveContainer" containerID="e5dada2ea0ad39e1dc0188db0a969655e7b51bb6fc32531e425e5a82a20515fd" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.508371 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76eadf8b-3ddc-461f-b8d6-87978146e077","Type":"ContainerStarted","Data":"865d61a3fc11d51708a6fedeed9a7df51332478bfae2f1bbb4e810c761e808fd"} Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.513655 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58599c4547-sbsm4" event={"ID":"1cce5e47-bb96-4468-8818-29869d013b7b","Type":"ContainerStarted","Data":"905d401eeef12238b11d533ae8399362a16f0cb55a5bb7b45619cc5d8b50c1a5"} Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.513704 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58599c4547-sbsm4" 
event={"ID":"1cce5e47-bb96-4468-8818-29869d013b7b","Type":"ContainerStarted","Data":"bc9cd8270035e8207d293cf09c9c4c4ebca77d6c1f0e5a6477b8d855270fbdf7"} Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.514573 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.515028 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.543407 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-58599c4547-sbsm4" podStartSLOduration=3.543391014 podStartE2EDuration="3.543391014s" podCreationTimestamp="2026-01-24 08:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:02:30.540481193 +0000 UTC m=+1289.260354491" watchObservedRunningTime="2026-01-24 08:02:30.543391014 +0000 UTC m=+1289.263264302" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.557873 4705 scope.go:117] "RemoveContainer" containerID="30819fe41c03ad3802061fd6d73a2f05ad94e2659ddeb188209f39b70767f2e8" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.580515 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.593640 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.604239 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:30 crc kubenswrapper[4705]: E0124 08:02:30.604774 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" containerName="proxy-httpd" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.604792 4705 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" containerName="proxy-httpd" Jan 24 08:02:30 crc kubenswrapper[4705]: E0124 08:02:30.604815 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" containerName="ceilometer-notification-agent" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.604843 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" containerName="ceilometer-notification-agent" Jan 24 08:02:30 crc kubenswrapper[4705]: E0124 08:02:30.604854 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" containerName="sg-core" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.604861 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" containerName="sg-core" Jan 24 08:02:30 crc kubenswrapper[4705]: E0124 08:02:30.607519 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" containerName="ceilometer-central-agent" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.607549 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" containerName="ceilometer-central-agent" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.607946 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" containerName="sg-core" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.607961 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" containerName="ceilometer-central-agent" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.607973 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" containerName="ceilometer-notification-agent" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.607991 4705 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" containerName="proxy-httpd" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.609692 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.620325 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.620337 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.636400 4705 scope.go:117] "RemoveContainer" containerID="633da75b7c84b689580791d007b55b4b2215b1d3a4e409488d0ea65ab740f5c3" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.640645 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.680019 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d6e336-0bd6-4154-a05c-120d71b3e521-log-httpd\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.680081 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.680162 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.680183 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d6e336-0bd6-4154-a05c-120d71b3e521-run-httpd\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.680246 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-scripts\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.680267 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-config-data\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.680293 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stdbh\" (UniqueName: \"kubernetes.io/projected/94d6e336-0bd6-4154-a05c-120d71b3e521-kube-api-access-stdbh\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.707400 4705 scope.go:117] "RemoveContainer" containerID="7107a4f05d90ca40eea05d689e19558530577d0bfe309d1b11e511c6cbad041c" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.783141 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-scripts\") 
pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.783214 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-config-data\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.783258 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stdbh\" (UniqueName: \"kubernetes.io/projected/94d6e336-0bd6-4154-a05c-120d71b3e521-kube-api-access-stdbh\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.783320 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d6e336-0bd6-4154-a05c-120d71b3e521-log-httpd\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.783364 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.783444 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.783475 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d6e336-0bd6-4154-a05c-120d71b3e521-run-httpd\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.784143 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d6e336-0bd6-4154-a05c-120d71b3e521-run-httpd\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.784899 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d6e336-0bd6-4154-a05c-120d71b3e521-log-httpd\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.790193 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.791518 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-config-data\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.793621 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-scripts\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: 
I0124 08:02:30.803587 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stdbh\" (UniqueName: \"kubernetes.io/projected/94d6e336-0bd6-4154-a05c-120d71b3e521-kube-api-access-stdbh\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.805265 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " pod="openstack/ceilometer-0" Jan 24 08:02:30 crc kubenswrapper[4705]: I0124 08:02:30.949340 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:02:31 crc kubenswrapper[4705]: I0124 08:02:31.539325 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76eadf8b-3ddc-461f-b8d6-87978146e077","Type":"ContainerStarted","Data":"c9dd99532c4b5c490a87f082e393f2f4aa1a29b448a6444ca96af75ac93d9eb6"} Jan 24 08:02:31 crc kubenswrapper[4705]: I0124 08:02:31.566312 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:31 crc kubenswrapper[4705]: I0124 08:02:31.593764 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e15332d4-f888-4bee-94b6-2ea12e682089" path="/var/lib/kubelet/pods/e15332d4-f888-4bee-94b6-2ea12e682089/volumes" Jan 24 08:02:31 crc kubenswrapper[4705]: I0124 08:02:31.595059 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.595040432 podStartE2EDuration="4.595040432s" podCreationTimestamp="2026-01-24 08:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:02:31.577121532 +0000 
UTC m=+1290.296994820" watchObservedRunningTime="2026-01-24 08:02:31.595040432 +0000 UTC m=+1290.314913720" Jan 24 08:02:31 crc kubenswrapper[4705]: W0124 08:02:31.609584 4705 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0dba7b53_b7e7_430c_bab3_5d5075b16fa3.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0dba7b53_b7e7_430c_bab3_5d5075b16fa3.slice: no such file or directory Jan 24 08:02:31 crc kubenswrapper[4705]: W0124 08:02:31.610122 4705 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f4bbbf7_117f_4759_86b7_d5df766762e3.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f4bbbf7_117f_4759_86b7_d5df766762e3.slice: no such file or directory Jan 24 08:02:31 crc kubenswrapper[4705]: W0124 08:02:31.611118 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod787ad3bd_2593_42a7_b368_70abddcd74da.slice/crio-c91e918230e493470a327c6187b94fa0aaf221efc2c16ad208a35fa23a7970ae.scope WatchSource:0}: Error finding container c91e918230e493470a327c6187b94fa0aaf221efc2c16ad208a35fa23a7970ae: Status 404 returned error can't find the container with id c91e918230e493470a327c6187b94fa0aaf221efc2c16ad208a35fa23a7970ae Jan 24 08:02:31 crc kubenswrapper[4705]: W0124 08:02:31.623649 4705 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode15332d4_f888_4bee_94b6_2ea12e682089.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode15332d4_f888_4bee_94b6_2ea12e682089.slice: no such file or directory Jan 24 08:02:31 crc 
kubenswrapper[4705]: W0124 08:02:31.648278 4705 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb2f0d71_c524_4d09_9924_5507ca1b4463.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb2f0d71_c524_4d09_9924_5507ca1b4463.slice: no such file or directory Jan 24 08:02:32 crc kubenswrapper[4705]: E0124 08:02:32.430186 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod688d82f6_748b_42fa_8595_b24a65ba77d3.slice/crio-4ea4bbe7fa8daa74c24823206ff8d6c30fc26b0d5440a29b1d1d67364e2b2046\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3dfdcfb_291c_48dc_a111_6037cf854b1c.slice/crio-conmon-6edb5fe6d24491ee37021a4455ba1e528f8d12d4d5238b70466a7f848ca210fe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3dfdcfb_291c_48dc_a111_6037cf854b1c.slice/crio-6edb5fe6d24491ee37021a4455ba1e528f8d12d4d5238b70466a7f848ca210fe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod688d82f6_748b_42fa_8595_b24a65ba77d3.slice\": RecentStats: unable to find data in memory cache]" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.526901 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.637962 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-config-data-custom\") pod \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.638442 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zd5n\" (UniqueName: \"kubernetes.io/projected/f3dfdcfb-291c-48dc-a111-6037cf854b1c-kube-api-access-9zd5n\") pod \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.638504 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3dfdcfb-291c-48dc-a111-6037cf854b1c-logs\") pod \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.638547 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-combined-ca-bundle\") pod \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.638596 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-config-data\") pod \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\" (UID: \"f3dfdcfb-291c-48dc-a111-6037cf854b1c\") " Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.639029 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f3dfdcfb-291c-48dc-a111-6037cf854b1c-logs" (OuterVolumeSpecName: "logs") pod "f3dfdcfb-291c-48dc-a111-6037cf854b1c" (UID: "f3dfdcfb-291c-48dc-a111-6037cf854b1c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.640607 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3dfdcfb-291c-48dc-a111-6037cf854b1c-logs\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.645212 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3dfdcfb-291c-48dc-a111-6037cf854b1c-kube-api-access-9zd5n" (OuterVolumeSpecName: "kube-api-access-9zd5n") pod "f3dfdcfb-291c-48dc-a111-6037cf854b1c" (UID: "f3dfdcfb-291c-48dc-a111-6037cf854b1c"). InnerVolumeSpecName "kube-api-access-9zd5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.659450 4705 generic.go:334] "Generic (PLEG): container finished" podID="f3dfdcfb-291c-48dc-a111-6037cf854b1c" containerID="6edb5fe6d24491ee37021a4455ba1e528f8d12d4d5238b70466a7f848ca210fe" exitCode=137 Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.659546 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.659600 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" event={"ID":"f3dfdcfb-291c-48dc-a111-6037cf854b1c","Type":"ContainerDied","Data":"6edb5fe6d24491ee37021a4455ba1e528f8d12d4d5238b70466a7f848ca210fe"} Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.660241 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d459d77f8-jpdgn" event={"ID":"f3dfdcfb-291c-48dc-a111-6037cf854b1c","Type":"ContainerDied","Data":"3ae1d3a752ae87f5ceb046433a968f0d4ac7d9cf106d84beab45dacc2aa251f7"} Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.660269 4705 scope.go:117] "RemoveContainer" containerID="6edb5fe6d24491ee37021a4455ba1e528f8d12d4d5238b70466a7f848ca210fe" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.661852 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f3dfdcfb-291c-48dc-a111-6037cf854b1c" (UID: "f3dfdcfb-291c-48dc-a111-6037cf854b1c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.667261 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d6e336-0bd6-4154-a05c-120d71b3e521","Type":"ContainerStarted","Data":"28fdfb2d43d3eb24b6b3b47f4aa1e8d43eedf0fa01b2bb0c01025b8732a3640a"} Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.680798 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3dfdcfb-291c-48dc-a111-6037cf854b1c" (UID: "f3dfdcfb-291c-48dc-a111-6037cf854b1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.732610 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-config-data" (OuterVolumeSpecName: "config-data") pod "f3dfdcfb-291c-48dc-a111-6037cf854b1c" (UID: "f3dfdcfb-291c-48dc-a111-6037cf854b1c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.742319 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.742348 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zd5n\" (UniqueName: \"kubernetes.io/projected/f3dfdcfb-291c-48dc-a111-6037cf854b1c-kube-api-access-9zd5n\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.742359 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.742370 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3dfdcfb-291c-48dc-a111-6037cf854b1c-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.826212 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.827772 4705 scope.go:117] "RemoveContainer" containerID="c18857cee3cdac665471f47c44dbd65d3fcba5f6578628cfa407422f76d5d072" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.868509 4705 scope.go:117] "RemoveContainer" containerID="6edb5fe6d24491ee37021a4455ba1e528f8d12d4d5238b70466a7f848ca210fe" Jan 24 08:02:32 crc kubenswrapper[4705]: E0124 08:02:32.869651 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6edb5fe6d24491ee37021a4455ba1e528f8d12d4d5238b70466a7f848ca210fe\": container with ID starting with 
6edb5fe6d24491ee37021a4455ba1e528f8d12d4d5238b70466a7f848ca210fe not found: ID does not exist" containerID="6edb5fe6d24491ee37021a4455ba1e528f8d12d4d5238b70466a7f848ca210fe" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.869722 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6edb5fe6d24491ee37021a4455ba1e528f8d12d4d5238b70466a7f848ca210fe"} err="failed to get container status \"6edb5fe6d24491ee37021a4455ba1e528f8d12d4d5238b70466a7f848ca210fe\": rpc error: code = NotFound desc = could not find container \"6edb5fe6d24491ee37021a4455ba1e528f8d12d4d5238b70466a7f848ca210fe\": container with ID starting with 6edb5fe6d24491ee37021a4455ba1e528f8d12d4d5238b70466a7f848ca210fe not found: ID does not exist" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.869752 4705 scope.go:117] "RemoveContainer" containerID="c18857cee3cdac665471f47c44dbd65d3fcba5f6578628cfa407422f76d5d072" Jan 24 08:02:32 crc kubenswrapper[4705]: E0124 08:02:32.870862 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c18857cee3cdac665471f47c44dbd65d3fcba5f6578628cfa407422f76d5d072\": container with ID starting with c18857cee3cdac665471f47c44dbd65d3fcba5f6578628cfa407422f76d5d072 not found: ID does not exist" containerID="c18857cee3cdac665471f47c44dbd65d3fcba5f6578628cfa407422f76d5d072" Jan 24 08:02:32 crc kubenswrapper[4705]: I0124 08:02:32.870898 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c18857cee3cdac665471f47c44dbd65d3fcba5f6578628cfa407422f76d5d072"} err="failed to get container status \"c18857cee3cdac665471f47c44dbd65d3fcba5f6578628cfa407422f76d5d072\": rpc error: code = NotFound desc = could not find container \"c18857cee3cdac665471f47c44dbd65d3fcba5f6578628cfa407422f76d5d072\": container with ID starting with c18857cee3cdac665471f47c44dbd65d3fcba5f6578628cfa407422f76d5d072 not found: ID does not 
exist" Jan 24 08:02:33 crc kubenswrapper[4705]: I0124 08:02:33.003093 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5d459d77f8-jpdgn"] Jan 24 08:02:33 crc kubenswrapper[4705]: I0124 08:02:33.022706 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5d459d77f8-jpdgn"] Jan 24 08:02:33 crc kubenswrapper[4705]: I0124 08:02:33.599511 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3dfdcfb-291c-48dc-a111-6037cf854b1c" path="/var/lib/kubelet/pods/f3dfdcfb-291c-48dc-a111-6037cf854b1c/volumes" Jan 24 08:02:33 crc kubenswrapper[4705]: I0124 08:02:33.684935 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d6e336-0bd6-4154-a05c-120d71b3e521","Type":"ContainerStarted","Data":"f0fff64361a841fe1a960a7a88ccb1739b2e15220bb487e2130b7d135e0156ee"} Jan 24 08:02:33 crc kubenswrapper[4705]: I0124 08:02:33.684998 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d6e336-0bd6-4154-a05c-120d71b3e521","Type":"ContainerStarted","Data":"3dedea5bf3b6a790031e494086028fa3cdddcbc35fc4b053b113dd0e42b5ed47"} Jan 24 08:02:34 crc kubenswrapper[4705]: I0124 08:02:34.712726 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d6e336-0bd6-4154-a05c-120d71b3e521","Type":"ContainerStarted","Data":"7960c2ae5465ecc9910ca2eb61f0a06057ebe5eccc3c68a70f0df07114d2bf74"} Jan 24 08:02:35 crc kubenswrapper[4705]: I0124 08:02:35.270483 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.093988 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.319296 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.319990 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58599c4547-sbsm4" Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.947685 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-dbcdf5676-895jp"] Jan 24 08:02:38 crc kubenswrapper[4705]: E0124 08:02:38.948127 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3dfdcfb-291c-48dc-a111-6037cf854b1c" containerName="barbican-keystone-listener-log" Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.948141 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3dfdcfb-291c-48dc-a111-6037cf854b1c" containerName="barbican-keystone-listener-log" Jan 24 08:02:38 crc kubenswrapper[4705]: E0124 08:02:38.948176 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3dfdcfb-291c-48dc-a111-6037cf854b1c" containerName="barbican-keystone-listener" Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.948185 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3dfdcfb-291c-48dc-a111-6037cf854b1c" containerName="barbican-keystone-listener" Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.948368 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3dfdcfb-291c-48dc-a111-6037cf854b1c" containerName="barbican-keystone-listener" Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.948393 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3dfdcfb-291c-48dc-a111-6037cf854b1c" containerName="barbican-keystone-listener-log" Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.949908 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.954536 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-scbxs" Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.954864 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.955143 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.968555 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-config-data-custom\") pod \"heat-engine-dbcdf5676-895jp\" (UID: \"b2713791-b8f1-47ae-a438-c2f5e97ef433\") " pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.968700 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-combined-ca-bundle\") pod \"heat-engine-dbcdf5676-895jp\" (UID: \"b2713791-b8f1-47ae-a438-c2f5e97ef433\") " pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.968750 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dxqk\" (UniqueName: \"kubernetes.io/projected/b2713791-b8f1-47ae-a438-c2f5e97ef433-kube-api-access-7dxqk\") pod \"heat-engine-dbcdf5676-895jp\" (UID: \"b2713791-b8f1-47ae-a438-c2f5e97ef433\") " pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.968984 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-config-data\") pod \"heat-engine-dbcdf5676-895jp\" (UID: \"b2713791-b8f1-47ae-a438-c2f5e97ef433\") " pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:02:38 crc kubenswrapper[4705]: I0124 08:02:38.987414 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-dbcdf5676-895jp"] Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.084940 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-config-data\") pod \"heat-engine-dbcdf5676-895jp\" (UID: \"b2713791-b8f1-47ae-a438-c2f5e97ef433\") " pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.086341 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-config-data-custom\") pod \"heat-engine-dbcdf5676-895jp\" (UID: \"b2713791-b8f1-47ae-a438-c2f5e97ef433\") " pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.086602 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-combined-ca-bundle\") pod \"heat-engine-dbcdf5676-895jp\" (UID: \"b2713791-b8f1-47ae-a438-c2f5e97ef433\") " pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.086713 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dxqk\" (UniqueName: \"kubernetes.io/projected/b2713791-b8f1-47ae-a438-c2f5e97ef433-kube-api-access-7dxqk\") pod \"heat-engine-dbcdf5676-895jp\" (UID: \"b2713791-b8f1-47ae-a438-c2f5e97ef433\") " pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 
08:02:39.100008 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-config-data-custom\") pod \"heat-engine-dbcdf5676-895jp\" (UID: \"b2713791-b8f1-47ae-a438-c2f5e97ef433\") " pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.103521 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-combined-ca-bundle\") pod \"heat-engine-dbcdf5676-895jp\" (UID: \"b2713791-b8f1-47ae-a438-c2f5e97ef433\") " pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.124071 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-config-data\") pod \"heat-engine-dbcdf5676-895jp\" (UID: \"b2713791-b8f1-47ae-a438-c2f5e97ef433\") " pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.138954 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-rv7zw"] Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.140770 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.173267 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dxqk\" (UniqueName: \"kubernetes.io/projected/b2713791-b8f1-47ae-a438-c2f5e97ef433-kube-api-access-7dxqk\") pod \"heat-engine-dbcdf5676-895jp\" (UID: \"b2713791-b8f1-47ae-a438-c2f5e97ef433\") " pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.204314 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7f2h\" (UniqueName: \"kubernetes.io/projected/8679a97c-a310-4bec-945f-4fb2756b3ff6-kube-api-access-j7f2h\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.204370 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.204391 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.204441 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" 
(UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.204503 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-config\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.204605 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.210156 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-rv7zw"] Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.248646 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7f6475444-p69rq"] Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.262886 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.275285 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.287613 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.307484 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-config\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.308029 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-config-data-custom\") pod \"heat-cfnapi-7f6475444-p69rq\" (UID: \"fffaad63-99ef-4990-957b-3433d20528c2\") " pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.308125 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn628\" (UniqueName: \"kubernetes.io/projected/fffaad63-99ef-4990-957b-3433d20528c2-kube-api-access-bn628\") pod \"heat-cfnapi-7f6475444-p69rq\" (UID: \"fffaad63-99ef-4990-957b-3433d20528c2\") " pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.308157 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-combined-ca-bundle\") pod \"heat-cfnapi-7f6475444-p69rq\" (UID: \"fffaad63-99ef-4990-957b-3433d20528c2\") " pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.308224 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: 
\"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.308316 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7f2h\" (UniqueName: \"kubernetes.io/projected/8679a97c-a310-4bec-945f-4fb2756b3ff6-kube-api-access-j7f2h\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.308345 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-config-data\") pod \"heat-cfnapi-7f6475444-p69rq\" (UID: \"fffaad63-99ef-4990-957b-3433d20528c2\") " pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.308364 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.308388 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.308431 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " 
pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.309814 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.310097 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.313711 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-config\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.316619 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.319890 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.341920 4705 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7f6475444-p69rq"] Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.350678 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7f2h\" (UniqueName: \"kubernetes.io/projected/8679a97c-a310-4bec-945f-4fb2756b3ff6-kube-api-access-j7f2h\") pod \"dnsmasq-dns-688b9f5b49-rv7zw\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") " pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.395439 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7bcb574654-nws5h"] Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.396694 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.401122 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.412854 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-config-data\") pod \"heat-cfnapi-7f6475444-p69rq\" (UID: \"fffaad63-99ef-4990-957b-3433d20528c2\") " pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.412991 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-config-data-custom\") pod \"heat-cfnapi-7f6475444-p69rq\" (UID: \"fffaad63-99ef-4990-957b-3433d20528c2\") " pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.413064 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn628\" (UniqueName: 
\"kubernetes.io/projected/fffaad63-99ef-4990-957b-3433d20528c2-kube-api-access-bn628\") pod \"heat-cfnapi-7f6475444-p69rq\" (UID: \"fffaad63-99ef-4990-957b-3433d20528c2\") " pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.413097 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-combined-ca-bundle\") pod \"heat-cfnapi-7f6475444-p69rq\" (UID: \"fffaad63-99ef-4990-957b-3433d20528c2\") " pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.429449 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-combined-ca-bundle\") pod \"heat-cfnapi-7f6475444-p69rq\" (UID: \"fffaad63-99ef-4990-957b-3433d20528c2\") " pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.436178 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-config-data\") pod \"heat-cfnapi-7f6475444-p69rq\" (UID: \"fffaad63-99ef-4990-957b-3433d20528c2\") " pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.470001 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-config-data-custom\") pod \"heat-cfnapi-7f6475444-p69rq\" (UID: \"fffaad63-99ef-4990-957b-3433d20528c2\") " pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.478687 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn628\" (UniqueName: 
\"kubernetes.io/projected/fffaad63-99ef-4990-957b-3433d20528c2-kube-api-access-bn628\") pod \"heat-cfnapi-7f6475444-p69rq\" (UID: \"fffaad63-99ef-4990-957b-3433d20528c2\") " pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.484919 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7bcb574654-nws5h"] Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.517670 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-config-data\") pod \"heat-api-7bcb574654-nws5h\" (UID: \"c8875877-e577-4bf1-a74b-60931b57036c\") " pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.517782 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-config-data-custom\") pod \"heat-api-7bcb574654-nws5h\" (UID: \"c8875877-e577-4bf1-a74b-60931b57036c\") " pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.517916 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd749\" (UniqueName: \"kubernetes.io/projected/c8875877-e577-4bf1-a74b-60931b57036c-kube-api-access-zd749\") pod \"heat-api-7bcb574654-nws5h\" (UID: \"c8875877-e577-4bf1-a74b-60931b57036c\") " pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.518046 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-combined-ca-bundle\") pod \"heat-api-7bcb574654-nws5h\" (UID: \"c8875877-e577-4bf1-a74b-60931b57036c\") " 
pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.576655 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.621657 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-config-data\") pod \"heat-api-7bcb574654-nws5h\" (UID: \"c8875877-e577-4bf1-a74b-60931b57036c\") " pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.621802 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-config-data-custom\") pod \"heat-api-7bcb574654-nws5h\" (UID: \"c8875877-e577-4bf1-a74b-60931b57036c\") " pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.622022 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd749\" (UniqueName: \"kubernetes.io/projected/c8875877-e577-4bf1-a74b-60931b57036c-kube-api-access-zd749\") pod \"heat-api-7bcb574654-nws5h\" (UID: \"c8875877-e577-4bf1-a74b-60931b57036c\") " pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.622065 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-combined-ca-bundle\") pod \"heat-api-7bcb574654-nws5h\" (UID: \"c8875877-e577-4bf1-a74b-60931b57036c\") " pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.626413 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-config-data-custom\") pod \"heat-api-7bcb574654-nws5h\" (UID: \"c8875877-e577-4bf1-a74b-60931b57036c\") " pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.627131 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-config-data\") pod \"heat-api-7bcb574654-nws5h\" (UID: \"c8875877-e577-4bf1-a74b-60931b57036c\") " pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.629304 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-combined-ca-bundle\") pod \"heat-api-7bcb574654-nws5h\" (UID: \"c8875877-e577-4bf1-a74b-60931b57036c\") " pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.643547 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.643688 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd749\" (UniqueName: \"kubernetes.io/projected/c8875877-e577-4bf1-a74b-60931b57036c-kube-api-access-zd749\") pod \"heat-api-7bcb574654-nws5h\" (UID: \"c8875877-e577-4bf1-a74b-60931b57036c\") " pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:39 crc kubenswrapper[4705]: I0124 08:02:39.809406 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.539436 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-dbcdf5676-895jp"] Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.717062 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7bcb574654-nws5h"] Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.726368 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7f6475444-p69rq"] Jan 24 08:02:44 crc kubenswrapper[4705]: W0124 08:02:44.733892 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfffaad63_99ef_4990_957b_3433d20528c2.slice/crio-dc2aab55633cc725e3bc65b8678618f585a872aae38411d66055631fd7c0cc85 WatchSource:0}: Error finding container dc2aab55633cc725e3bc65b8678618f585a872aae38411d66055631fd7c0cc85: Status 404 returned error can't find the container with id dc2aab55633cc725e3bc65b8678618f585a872aae38411d66055631fd7c0cc85 Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.735121 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-rv7zw"] Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.825422 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b84bd122-47ef-448f-914d-c65c554fa7c1","Type":"ContainerStarted","Data":"e2f51c5472d07b377d08fb56c4c54925392682654e4a391e6d6c605ece5a52f9"} Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.827383 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-dbcdf5676-895jp" event={"ID":"b2713791-b8f1-47ae-a438-c2f5e97ef433","Type":"ContainerStarted","Data":"b3e3598aeab33154c13cc40f817158c06e1a8a8174eea892f41d5f80fcd943c8"} Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.827454 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-engine-dbcdf5676-895jp" event={"ID":"b2713791-b8f1-47ae-a438-c2f5e97ef433","Type":"ContainerStarted","Data":"c8cb21deda4ef7327d9fec807e9638f45b7508ec46af830ac40e9ab2c652468c"} Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.827629 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.829060 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f6475444-p69rq" event={"ID":"fffaad63-99ef-4990-957b-3433d20528c2","Type":"ContainerStarted","Data":"dc2aab55633cc725e3bc65b8678618f585a872aae38411d66055631fd7c0cc85"} Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.830595 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" event={"ID":"8679a97c-a310-4bec-945f-4fb2756b3ff6","Type":"ContainerStarted","Data":"433fcd95bd882b20cbef2800cde532ca1768b690c8cfd8224c505eb45d69e5db"} Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.836533 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d6e336-0bd6-4154-a05c-120d71b3e521","Type":"ContainerStarted","Data":"9da6c0976b8c1c07d6d5c3c8287425fd638101a7b13beee8987000c8ed1c0c5c"} Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.836767 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerName="ceilometer-central-agent" containerID="cri-o://3dedea5bf3b6a790031e494086028fa3cdddcbc35fc4b053b113dd0e42b5ed47" gracePeriod=30 Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.836892 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.836946 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerName="proxy-httpd" containerID="cri-o://9da6c0976b8c1c07d6d5c3c8287425fd638101a7b13beee8987000c8ed1c0c5c" gracePeriod=30 Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.836996 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerName="sg-core" containerID="cri-o://7960c2ae5465ecc9910ca2eb61f0a06057ebe5eccc3c68a70f0df07114d2bf74" gracePeriod=30 Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.837047 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerName="ceilometer-notification-agent" containerID="cri-o://f0fff64361a841fe1a960a7a88ccb1739b2e15220bb487e2130b7d135e0156ee" gracePeriod=30 Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.850557 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7bcb574654-nws5h" event={"ID":"c8875877-e577-4bf1-a74b-60931b57036c","Type":"ContainerStarted","Data":"4c2b255eb21c11263993b300ea7a06843bd242b574de60b8f94aa0aa8168ba43"} Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.861054 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.6817035809999998 podStartE2EDuration="16.861032597s" podCreationTimestamp="2026-01-24 08:02:28 +0000 UTC" firstStartedPulling="2026-01-24 08:02:28.868277267 +0000 UTC m=+1287.588150565" lastFinishedPulling="2026-01-24 08:02:44.047606283 +0000 UTC m=+1302.767479581" observedRunningTime="2026-01-24 08:02:44.849136324 +0000 UTC m=+1303.569009632" watchObservedRunningTime="2026-01-24 08:02:44.861032597 +0000 UTC m=+1303.580905875" Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.877991 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.412946052 podStartE2EDuration="14.877967381s" podCreationTimestamp="2026-01-24 08:02:30 +0000 UTC" firstStartedPulling="2026-01-24 08:02:31.585975919 +0000 UTC m=+1290.305849207" lastFinishedPulling="2026-01-24 08:02:44.050997248 +0000 UTC m=+1302.770870536" observedRunningTime="2026-01-24 08:02:44.87653144 +0000 UTC m=+1303.596404728" watchObservedRunningTime="2026-01-24 08:02:44.877967381 +0000 UTC m=+1303.597840669" Jan 24 08:02:44 crc kubenswrapper[4705]: I0124 08:02:44.909916 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-dbcdf5676-895jp" podStartSLOduration=6.909804391 podStartE2EDuration="6.909804391s" podCreationTimestamp="2026-01-24 08:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:02:44.900562813 +0000 UTC m=+1303.620436101" watchObservedRunningTime="2026-01-24 08:02:44.909804391 +0000 UTC m=+1303.629677679" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.388767 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-bcd878cb5-xnt7l" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.449456 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-77b89ffb5c-ntbn2"] Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.451997 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.466732 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7798c79c68-jdzb7"] Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.485558 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7798c79c68-jdzb7" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.510099 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-77b89ffb5c-ntbn2"] Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.549397 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7798c79c68-jdzb7"] Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.557834 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7b8dc4cdfc-hwv79"] Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.568862 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.574361 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b8dc4cdfc-hwv79"] Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.592943 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-config-data-custom\") pod \"heat-api-77b89ffb5c-ntbn2\" (UID: \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\") " pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.597340 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28ccc81-d764-4810-a649-42ff56ae43c8-config-data\") pod \"heat-engine-7798c79c68-jdzb7\" (UID: \"d28ccc81-d764-4810-a649-42ff56ae43c8\") " pod="openstack/heat-engine-7798c79c68-jdzb7" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.597660 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xktl4\" (UniqueName: \"kubernetes.io/projected/8fc5b3f3-290d-493a-921c-b114b7c2fd98-kube-api-access-xktl4\") 
pod \"heat-api-77b89ffb5c-ntbn2\" (UID: \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\") " pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.597965 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-combined-ca-bundle\") pod \"heat-api-77b89ffb5c-ntbn2\" (UID: \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\") " pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.598129 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28ccc81-d764-4810-a649-42ff56ae43c8-combined-ca-bundle\") pod \"heat-engine-7798c79c68-jdzb7\" (UID: \"d28ccc81-d764-4810-a649-42ff56ae43c8\") " pod="openstack/heat-engine-7798c79c68-jdzb7" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.598316 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-config-data\") pod \"heat-api-77b89ffb5c-ntbn2\" (UID: \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\") " pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.598667 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d28ccc81-d764-4810-a649-42ff56ae43c8-config-data-custom\") pod \"heat-engine-7798c79c68-jdzb7\" (UID: \"d28ccc81-d764-4810-a649-42ff56ae43c8\") " pod="openstack/heat-engine-7798c79c68-jdzb7" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.598862 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xbrd\" (UniqueName: 
\"kubernetes.io/projected/d28ccc81-d764-4810-a649-42ff56ae43c8-kube-api-access-2xbrd\") pod \"heat-engine-7798c79c68-jdzb7\" (UID: \"d28ccc81-d764-4810-a649-42ff56ae43c8\") " pod="openstack/heat-engine-7798c79c68-jdzb7" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.606248 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77b96594b8-9jfp2"] Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.606482 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-77b96594b8-9jfp2" podUID="266a132c-d822-44b5-a75c-e359c65c78ea" containerName="neutron-api" containerID="cri-o://dea2276ce0890634faad2f01c08bc3abb5132cd1425bd3cf9af51cd32d6af9a9" gracePeriod=30 Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.607246 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-77b96594b8-9jfp2" podUID="266a132c-d822-44b5-a75c-e359c65c78ea" containerName="neutron-httpd" containerID="cri-o://46fb9222c6415345ec7a8700123b928768dcdad6f528b20811dbe265f3c214b2" gracePeriod=30 Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.701766 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-config-data\") pod \"heat-cfnapi-7b8dc4cdfc-hwv79\" (UID: \"699f900a-a29f-4b4b-b38a-102f8d440596\") " pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.701842 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr8k4\" (UniqueName: \"kubernetes.io/projected/699f900a-a29f-4b4b-b38a-102f8d440596-kube-api-access-nr8k4\") pod \"heat-cfnapi-7b8dc4cdfc-hwv79\" (UID: \"699f900a-a29f-4b4b-b38a-102f8d440596\") " pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.701914 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-config-data-custom\") pod \"heat-cfnapi-7b8dc4cdfc-hwv79\" (UID: \"699f900a-a29f-4b4b-b38a-102f8d440596\") " pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.701969 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-config-data-custom\") pod \"heat-api-77b89ffb5c-ntbn2\" (UID: \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\") " pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.702010 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28ccc81-d764-4810-a649-42ff56ae43c8-config-data\") pod \"heat-engine-7798c79c68-jdzb7\" (UID: \"d28ccc81-d764-4810-a649-42ff56ae43c8\") " pod="openstack/heat-engine-7798c79c68-jdzb7" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.702048 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-combined-ca-bundle\") pod \"heat-cfnapi-7b8dc4cdfc-hwv79\" (UID: \"699f900a-a29f-4b4b-b38a-102f8d440596\") " pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.702075 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xktl4\" (UniqueName: \"kubernetes.io/projected/8fc5b3f3-290d-493a-921c-b114b7c2fd98-kube-api-access-xktl4\") pod \"heat-api-77b89ffb5c-ntbn2\" (UID: \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\") " pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.702156 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-combined-ca-bundle\") pod \"heat-api-77b89ffb5c-ntbn2\" (UID: \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\") " pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.702211 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28ccc81-d764-4810-a649-42ff56ae43c8-combined-ca-bundle\") pod \"heat-engine-7798c79c68-jdzb7\" (UID: \"d28ccc81-d764-4810-a649-42ff56ae43c8\") " pod="openstack/heat-engine-7798c79c68-jdzb7" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.702231 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-config-data\") pod \"heat-api-77b89ffb5c-ntbn2\" (UID: \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\") " pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.702265 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d28ccc81-d764-4810-a649-42ff56ae43c8-config-data-custom\") pod \"heat-engine-7798c79c68-jdzb7\" (UID: \"d28ccc81-d764-4810-a649-42ff56ae43c8\") " pod="openstack/heat-engine-7798c79c68-jdzb7" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.702313 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xbrd\" (UniqueName: \"kubernetes.io/projected/d28ccc81-d764-4810-a649-42ff56ae43c8-kube-api-access-2xbrd\") pod \"heat-engine-7798c79c68-jdzb7\" (UID: \"d28ccc81-d764-4810-a649-42ff56ae43c8\") " pod="openstack/heat-engine-7798c79c68-jdzb7" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.714027 4705 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28ccc81-d764-4810-a649-42ff56ae43c8-combined-ca-bundle\") pod \"heat-engine-7798c79c68-jdzb7\" (UID: \"d28ccc81-d764-4810-a649-42ff56ae43c8\") " pod="openstack/heat-engine-7798c79c68-jdzb7" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.714803 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-combined-ca-bundle\") pod \"heat-api-77b89ffb5c-ntbn2\" (UID: \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\") " pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.718049 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-config-data\") pod \"heat-api-77b89ffb5c-ntbn2\" (UID: \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\") " pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.728483 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28ccc81-d764-4810-a649-42ff56ae43c8-config-data\") pod \"heat-engine-7798c79c68-jdzb7\" (UID: \"d28ccc81-d764-4810-a649-42ff56ae43c8\") " pod="openstack/heat-engine-7798c79c68-jdzb7" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.733322 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d28ccc81-d764-4810-a649-42ff56ae43c8-config-data-custom\") pod \"heat-engine-7798c79c68-jdzb7\" (UID: \"d28ccc81-d764-4810-a649-42ff56ae43c8\") " pod="openstack/heat-engine-7798c79c68-jdzb7" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.734518 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xbrd\" (UniqueName: 
\"kubernetes.io/projected/d28ccc81-d764-4810-a649-42ff56ae43c8-kube-api-access-2xbrd\") pod \"heat-engine-7798c79c68-jdzb7\" (UID: \"d28ccc81-d764-4810-a649-42ff56ae43c8\") " pod="openstack/heat-engine-7798c79c68-jdzb7" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.734865 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-config-data-custom\") pod \"heat-api-77b89ffb5c-ntbn2\" (UID: \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\") " pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.750759 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xktl4\" (UniqueName: \"kubernetes.io/projected/8fc5b3f3-290d-493a-921c-b114b7c2fd98-kube-api-access-xktl4\") pod \"heat-api-77b89ffb5c-ntbn2\" (UID: \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\") " pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.804532 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-combined-ca-bundle\") pod \"heat-cfnapi-7b8dc4cdfc-hwv79\" (UID: \"699f900a-a29f-4b4b-b38a-102f8d440596\") " pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.804678 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-config-data\") pod \"heat-cfnapi-7b8dc4cdfc-hwv79\" (UID: \"699f900a-a29f-4b4b-b38a-102f8d440596\") " pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.804696 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr8k4\" (UniqueName: 
\"kubernetes.io/projected/699f900a-a29f-4b4b-b38a-102f8d440596-kube-api-access-nr8k4\") pod \"heat-cfnapi-7b8dc4cdfc-hwv79\" (UID: \"699f900a-a29f-4b4b-b38a-102f8d440596\") " pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.804743 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-config-data-custom\") pod \"heat-cfnapi-7b8dc4cdfc-hwv79\" (UID: \"699f900a-a29f-4b4b-b38a-102f8d440596\") " pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.810549 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-combined-ca-bundle\") pod \"heat-cfnapi-7b8dc4cdfc-hwv79\" (UID: \"699f900a-a29f-4b4b-b38a-102f8d440596\") " pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.817862 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-config-data\") pod \"heat-cfnapi-7b8dc4cdfc-hwv79\" (UID: \"699f900a-a29f-4b4b-b38a-102f8d440596\") " pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.826535 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-config-data-custom\") pod \"heat-cfnapi-7b8dc4cdfc-hwv79\" (UID: \"699f900a-a29f-4b4b-b38a-102f8d440596\") " pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.836387 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr8k4\" (UniqueName: 
\"kubernetes.io/projected/699f900a-a29f-4b4b-b38a-102f8d440596-kube-api-access-nr8k4\") pod \"heat-cfnapi-7b8dc4cdfc-hwv79\" (UID: \"699f900a-a29f-4b4b-b38a-102f8d440596\") " pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.932927 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7798c79c68-jdzb7" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.935102 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:02:45 crc kubenswrapper[4705]: I0124 08:02:45.945490 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:02:46 crc kubenswrapper[4705]: I0124 08:02:46.006683 4705 generic.go:334] "Generic (PLEG): container finished" podID="266a132c-d822-44b5-a75c-e359c65c78ea" containerID="46fb9222c6415345ec7a8700123b928768dcdad6f528b20811dbe265f3c214b2" exitCode=0 Jan 24 08:02:46 crc kubenswrapper[4705]: I0124 08:02:46.006748 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b96594b8-9jfp2" event={"ID":"266a132c-d822-44b5-a75c-e359c65c78ea","Type":"ContainerDied","Data":"46fb9222c6415345ec7a8700123b928768dcdad6f528b20811dbe265f3c214b2"} Jan 24 08:02:46 crc kubenswrapper[4705]: I0124 08:02:46.011514 4705 generic.go:334] "Generic (PLEG): container finished" podID="8679a97c-a310-4bec-945f-4fb2756b3ff6" containerID="20de91b0314a537bc89cbb390c85be966ff7e50bcca9251db12658c0b655f57a" exitCode=0 Jan 24 08:02:46 crc kubenswrapper[4705]: I0124 08:02:46.011664 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" event={"ID":"8679a97c-a310-4bec-945f-4fb2756b3ff6","Type":"ContainerDied","Data":"20de91b0314a537bc89cbb390c85be966ff7e50bcca9251db12658c0b655f57a"} Jan 24 08:02:46 crc kubenswrapper[4705]: I0124 08:02:46.047840 4705 generic.go:334] "Generic 
(PLEG): container finished" podID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerID="9da6c0976b8c1c07d6d5c3c8287425fd638101a7b13beee8987000c8ed1c0c5c" exitCode=0 Jan 24 08:02:46 crc kubenswrapper[4705]: I0124 08:02:46.047886 4705 generic.go:334] "Generic (PLEG): container finished" podID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerID="7960c2ae5465ecc9910ca2eb61f0a06057ebe5eccc3c68a70f0df07114d2bf74" exitCode=2 Jan 24 08:02:46 crc kubenswrapper[4705]: I0124 08:02:46.047895 4705 generic.go:334] "Generic (PLEG): container finished" podID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerID="3dedea5bf3b6a790031e494086028fa3cdddcbc35fc4b053b113dd0e42b5ed47" exitCode=0 Jan 24 08:02:46 crc kubenswrapper[4705]: I0124 08:02:46.048225 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d6e336-0bd6-4154-a05c-120d71b3e521","Type":"ContainerDied","Data":"9da6c0976b8c1c07d6d5c3c8287425fd638101a7b13beee8987000c8ed1c0c5c"} Jan 24 08:02:46 crc kubenswrapper[4705]: I0124 08:02:46.048288 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d6e336-0bd6-4154-a05c-120d71b3e521","Type":"ContainerDied","Data":"7960c2ae5465ecc9910ca2eb61f0a06057ebe5eccc3c68a70f0df07114d2bf74"} Jan 24 08:02:46 crc kubenswrapper[4705]: I0124 08:02:46.048300 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d6e336-0bd6-4154-a05c-120d71b3e521","Type":"ContainerDied","Data":"3dedea5bf3b6a790031e494086028fa3cdddcbc35fc4b053b113dd0e42b5ed47"} Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.063511 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7798c79c68-jdzb7"] Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.129116 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" 
event={"ID":"8679a97c-a310-4bec-945f-4fb2756b3ff6","Type":"ContainerStarted","Data":"44908a71dc8e241f15bb05f90d56774b4c5c7f20c6b0da0b2c07e51cfc63a8d3"} Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.129201 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.152073 4705 generic.go:334] "Generic (PLEG): container finished" podID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerID="f0fff64361a841fe1a960a7a88ccb1739b2e15220bb487e2130b7d135e0156ee" exitCode=0 Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.152121 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d6e336-0bd6-4154-a05c-120d71b3e521","Type":"ContainerDied","Data":"f0fff64361a841fe1a960a7a88ccb1739b2e15220bb487e2130b7d135e0156ee"} Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.173930 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b8dc4cdfc-hwv79"] Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.175101 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" podStartSLOduration=8.175083587 podStartE2EDuration="8.175083587s" podCreationTimestamp="2026-01-24 08:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:02:47.161333292 +0000 UTC m=+1305.881206580" watchObservedRunningTime="2026-01-24 08:02:47.175083587 +0000 UTC m=+1305.894956875" Jan 24 08:02:47 crc kubenswrapper[4705]: W0124 08:02:47.182433 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod699f900a_a29f_4b4b_b38a_102f8d440596.slice/crio-a2a4808df1765d8fe44bc5c51823ba237d9ac518b6a3bb956ac055d1ad7f11a2 WatchSource:0}: Error finding container 
a2a4808df1765d8fe44bc5c51823ba237d9ac518b6a3bb956ac055d1ad7f11a2: Status 404 returned error can't find the container with id a2a4808df1765d8fe44bc5c51823ba237d9ac518b6a3bb956ac055d1ad7f11a2 Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.253888 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-77b89ffb5c-ntbn2"] Jan 24 08:02:47 crc kubenswrapper[4705]: W0124 08:02:47.287078 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fc5b3f3_290d_493a_921c_b114b7c2fd98.slice/crio-1b7649ced2a6e2d92bd80a266a6553c3396fa6c284a4f050c96f09bba8efbe30 WatchSource:0}: Error finding container 1b7649ced2a6e2d92bd80a266a6553c3396fa6c284a4f050c96f09bba8efbe30: Status 404 returned error can't find the container with id 1b7649ced2a6e2d92bd80a266a6553c3396fa6c284a4f050c96f09bba8efbe30 Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.659190 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.818587 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d6e336-0bd6-4154-a05c-120d71b3e521-log-httpd\") pod \"94d6e336-0bd6-4154-a05c-120d71b3e521\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.818660 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-config-data\") pod \"94d6e336-0bd6-4154-a05c-120d71b3e521\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.818700 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-combined-ca-bundle\") pod \"94d6e336-0bd6-4154-a05c-120d71b3e521\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.818894 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stdbh\" (UniqueName: \"kubernetes.io/projected/94d6e336-0bd6-4154-a05c-120d71b3e521-kube-api-access-stdbh\") pod \"94d6e336-0bd6-4154-a05c-120d71b3e521\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.818942 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-scripts\") pod \"94d6e336-0bd6-4154-a05c-120d71b3e521\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.818971 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/94d6e336-0bd6-4154-a05c-120d71b3e521-run-httpd\") pod \"94d6e336-0bd6-4154-a05c-120d71b3e521\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.819069 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-sg-core-conf-yaml\") pod \"94d6e336-0bd6-4154-a05c-120d71b3e521\" (UID: \"94d6e336-0bd6-4154-a05c-120d71b3e521\") " Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.820475 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94d6e336-0bd6-4154-a05c-120d71b3e521-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "94d6e336-0bd6-4154-a05c-120d71b3e521" (UID: "94d6e336-0bd6-4154-a05c-120d71b3e521"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.827137 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94d6e336-0bd6-4154-a05c-120d71b3e521-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "94d6e336-0bd6-4154-a05c-120d71b3e521" (UID: "94d6e336-0bd6-4154-a05c-120d71b3e521"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.828238 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d6e336-0bd6-4154-a05c-120d71b3e521-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.828271 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94d6e336-0bd6-4154-a05c-120d71b3e521-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.829790 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d6e336-0bd6-4154-a05c-120d71b3e521-kube-api-access-stdbh" (OuterVolumeSpecName: "kube-api-access-stdbh") pod "94d6e336-0bd6-4154-a05c-120d71b3e521" (UID: "94d6e336-0bd6-4154-a05c-120d71b3e521"). InnerVolumeSpecName "kube-api-access-stdbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.832531 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-scripts" (OuterVolumeSpecName: "scripts") pod "94d6e336-0bd6-4154-a05c-120d71b3e521" (UID: "94d6e336-0bd6-4154-a05c-120d71b3e521"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.874771 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "94d6e336-0bd6-4154-a05c-120d71b3e521" (UID: "94d6e336-0bd6-4154-a05c-120d71b3e521"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.930866 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.931189 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stdbh\" (UniqueName: \"kubernetes.io/projected/94d6e336-0bd6-4154-a05c-120d71b3e521-kube-api-access-stdbh\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.931756 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.939953 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94d6e336-0bd6-4154-a05c-120d71b3e521" (UID: "94d6e336-0bd6-4154-a05c-120d71b3e521"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:47 crc kubenswrapper[4705]: I0124 08:02:47.974600 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-config-data" (OuterVolumeSpecName: "config-data") pod "94d6e336-0bd6-4154-a05c-120d71b3e521" (UID: "94d6e336-0bd6-4154-a05c-120d71b3e521"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.036422 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.036964 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d6e336-0bd6-4154-a05c-120d71b3e521-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.165545 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77b89ffb5c-ntbn2" event={"ID":"8fc5b3f3-290d-493a-921c-b114b7c2fd98","Type":"ContainerStarted","Data":"1b7649ced2a6e2d92bd80a266a6553c3396fa6c284a4f050c96f09bba8efbe30"} Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.169346 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94d6e336-0bd6-4154-a05c-120d71b3e521","Type":"ContainerDied","Data":"28fdfb2d43d3eb24b6b3b47f4aa1e8d43eedf0fa01b2bb0c01025b8732a3640a"} Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.169504 4705 scope.go:117] "RemoveContainer" containerID="9da6c0976b8c1c07d6d5c3c8287425fd638101a7b13beee8987000c8ed1c0c5c" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.169769 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.171618 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7798c79c68-jdzb7" event={"ID":"d28ccc81-d764-4810-a649-42ff56ae43c8","Type":"ContainerStarted","Data":"44ddf7048e7b63825a9a797b94bcd202e7dac6daa8a83054298621740c1a2bd7"} Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.171666 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7798c79c68-jdzb7" event={"ID":"d28ccc81-d764-4810-a649-42ff56ae43c8","Type":"ContainerStarted","Data":"aa192468248aaf0643f8730ae1a129345d562e22243d9d774a3477b6a393437a"} Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.171974 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7798c79c68-jdzb7" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.173988 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" event={"ID":"699f900a-a29f-4b4b-b38a-102f8d440596","Type":"ContainerStarted","Data":"a2a4808df1765d8fe44bc5c51823ba237d9ac518b6a3bb956ac055d1ad7f11a2"} Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.233124 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7798c79c68-jdzb7" podStartSLOduration=3.233098272 podStartE2EDuration="3.233098272s" podCreationTimestamp="2026-01-24 08:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:02:48.20264871 +0000 UTC m=+1306.922522048" watchObservedRunningTime="2026-01-24 08:02:48.233098272 +0000 UTC m=+1306.952971560" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.297598 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.330528 4705 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.350624 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:48 crc kubenswrapper[4705]: E0124 08:02:48.351478 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerName="ceilometer-notification-agent" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.351502 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerName="ceilometer-notification-agent" Jan 24 08:02:48 crc kubenswrapper[4705]: E0124 08:02:48.351522 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerName="sg-core" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.351530 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerName="sg-core" Jan 24 08:02:48 crc kubenswrapper[4705]: E0124 08:02:48.351547 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerName="proxy-httpd" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.351553 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerName="proxy-httpd" Jan 24 08:02:48 crc kubenswrapper[4705]: E0124 08:02:48.351562 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerName="ceilometer-central-agent" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.351568 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerName="ceilometer-central-agent" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.351759 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" 
containerName="ceilometer-notification-agent" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.351776 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerName="proxy-httpd" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.351788 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerName="ceilometer-central-agent" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.351802 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" containerName="sg-core" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.354087 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.357553 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.357809 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.367521 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.367636 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.367684 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-scripts\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.367791 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e29114b1-8009-4cf4-8eef-20c19e0687d2-run-httpd\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.367944 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-config-data\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.367978 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e29114b1-8009-4cf4-8eef-20c19e0687d2-log-httpd\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.368049 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn2jq\" (UniqueName: \"kubernetes.io/projected/e29114b1-8009-4cf4-8eef-20c19e0687d2-kube-api-access-cn2jq\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.387202 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.478184 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-config-data\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.478250 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e29114b1-8009-4cf4-8eef-20c19e0687d2-log-httpd\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.478335 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn2jq\" (UniqueName: \"kubernetes.io/projected/e29114b1-8009-4cf4-8eef-20c19e0687d2-kube-api-access-cn2jq\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.478393 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.478469 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.478507 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-scripts\") pod \"ceilometer-0\" (UID: 
\"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.478594 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e29114b1-8009-4cf4-8eef-20c19e0687d2-run-httpd\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.479202 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e29114b1-8009-4cf4-8eef-20c19e0687d2-run-httpd\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.480316 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e29114b1-8009-4cf4-8eef-20c19e0687d2-log-httpd\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.487118 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-scripts\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.487320 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-config-data\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.488465 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.494729 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.501565 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn2jq\" (UniqueName: \"kubernetes.io/projected/e29114b1-8009-4cf4-8eef-20c19e0687d2-kube-api-access-cn2jq\") pod \"ceilometer-0\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " pod="openstack/ceilometer-0" Jan 24 08:02:48 crc kubenswrapper[4705]: I0124 08:02:48.682003 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.108774 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7bcb574654-nws5h"] Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.138938 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7f6475444-p69rq"] Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.156730 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6bbd698cdd-bj25j"] Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.157996 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.166097 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.166488 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.209562 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6bbd698cdd-bj25j"] Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.219048 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb805121-ae65-457e-877f-2db0ae5e61dc-internal-tls-certs\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.219344 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqcpt\" (UniqueName: \"kubernetes.io/projected/bb805121-ae65-457e-877f-2db0ae5e61dc-kube-api-access-gqcpt\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.219490 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb805121-ae65-457e-877f-2db0ae5e61dc-config-data-custom\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.219591 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bb805121-ae65-457e-877f-2db0ae5e61dc-config-data\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.219683 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb805121-ae65-457e-877f-2db0ae5e61dc-combined-ca-bundle\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.219895 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb805121-ae65-457e-877f-2db0ae5e61dc-public-tls-certs\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.223870 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-67559d7f8-s8rzr"] Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.225492 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.229992 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.230268 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.309061 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-67559d7f8-s8rzr"] Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.324840 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13afed15-05ec-4ae6-a29f-5c9226770a19-config-data-custom\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.325286 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb805121-ae65-457e-877f-2db0ae5e61dc-public-tls-certs\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.325486 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13afed15-05ec-4ae6-a29f-5c9226770a19-combined-ca-bundle\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.325795 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/13afed15-05ec-4ae6-a29f-5c9226770a19-internal-tls-certs\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.325975 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb805121-ae65-457e-877f-2db0ae5e61dc-internal-tls-certs\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.328067 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nprx7\" (UniqueName: \"kubernetes.io/projected/13afed15-05ec-4ae6-a29f-5c9226770a19-kube-api-access-nprx7\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.329426 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqcpt\" (UniqueName: \"kubernetes.io/projected/bb805121-ae65-457e-877f-2db0ae5e61dc-kube-api-access-gqcpt\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.329557 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13afed15-05ec-4ae6-a29f-5c9226770a19-public-tls-certs\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.329708 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/bb805121-ae65-457e-877f-2db0ae5e61dc-config-data-custom\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.329974 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb805121-ae65-457e-877f-2db0ae5e61dc-config-data\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.330183 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb805121-ae65-457e-877f-2db0ae5e61dc-combined-ca-bundle\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.330303 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13afed15-05ec-4ae6-a29f-5c9226770a19-config-data\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.340559 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb805121-ae65-457e-877f-2db0ae5e61dc-config-data-custom\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.341761 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bb805121-ae65-457e-877f-2db0ae5e61dc-internal-tls-certs\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.345078 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb805121-ae65-457e-877f-2db0ae5e61dc-combined-ca-bundle\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.356912 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb805121-ae65-457e-877f-2db0ae5e61dc-config-data\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.363032 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb805121-ae65-457e-877f-2db0ae5e61dc-public-tls-certs\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.431942 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqcpt\" (UniqueName: \"kubernetes.io/projected/bb805121-ae65-457e-877f-2db0ae5e61dc-kube-api-access-gqcpt\") pod \"heat-api-6bbd698cdd-bj25j\" (UID: \"bb805121-ae65-457e-877f-2db0ae5e61dc\") " pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.432930 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13afed15-05ec-4ae6-a29f-5c9226770a19-public-tls-certs\") pod 
\"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.433027 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13afed15-05ec-4ae6-a29f-5c9226770a19-config-data\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.433061 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13afed15-05ec-4ae6-a29f-5c9226770a19-config-data-custom\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.433096 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13afed15-05ec-4ae6-a29f-5c9226770a19-combined-ca-bundle\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.433131 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13afed15-05ec-4ae6-a29f-5c9226770a19-internal-tls-certs\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.433182 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nprx7\" (UniqueName: \"kubernetes.io/projected/13afed15-05ec-4ae6-a29f-5c9226770a19-kube-api-access-nprx7\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: 
\"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.457981 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13afed15-05ec-4ae6-a29f-5c9226770a19-combined-ca-bundle\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.463289 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13afed15-05ec-4ae6-a29f-5c9226770a19-public-tls-certs\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.474009 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13afed15-05ec-4ae6-a29f-5c9226770a19-config-data-custom\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.475941 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13afed15-05ec-4ae6-a29f-5c9226770a19-config-data\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.478008 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13afed15-05ec-4ae6-a29f-5c9226770a19-internal-tls-certs\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 
08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.507156 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nprx7\" (UniqueName: \"kubernetes.io/projected/13afed15-05ec-4ae6-a29f-5c9226770a19-kube-api-access-nprx7\") pod \"heat-cfnapi-67559d7f8-s8rzr\" (UID: \"13afed15-05ec-4ae6-a29f-5c9226770a19\") " pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.526591 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.579983 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:49 crc kubenswrapper[4705]: I0124 08:02:49.612019 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94d6e336-0bd6-4154-a05c-120d71b3e521" path="/var/lib/kubelet/pods/94d6e336-0bd6-4154-a05c-120d71b3e521/volumes" Jan 24 08:02:50 crc kubenswrapper[4705]: I0124 08:02:50.548230 4705 scope.go:117] "RemoveContainer" containerID="7960c2ae5465ecc9910ca2eb61f0a06057ebe5eccc3c68a70f0df07114d2bf74" Jan 24 08:02:50 crc kubenswrapper[4705]: I0124 08:02:50.986340 4705 scope.go:117] "RemoveContainer" containerID="f0fff64361a841fe1a960a7a88ccb1739b2e15220bb487e2130b7d135e0156ee" Jan 24 08:02:51 crc kubenswrapper[4705]: I0124 08:02:51.267937 4705 scope.go:117] "RemoveContainer" containerID="3dedea5bf3b6a790031e494086028fa3cdddcbc35fc4b053b113dd0e42b5ed47" Jan 24 08:02:51 crc kubenswrapper[4705]: I0124 08:02:51.295495 4705 generic.go:334] "Generic (PLEG): container finished" podID="266a132c-d822-44b5-a75c-e359c65c78ea" containerID="dea2276ce0890634faad2f01c08bc3abb5132cd1425bd3cf9af51cd32d6af9a9" exitCode=0 Jan 24 08:02:51 crc kubenswrapper[4705]: I0124 08:02:51.295618 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b96594b8-9jfp2" 
event={"ID":"266a132c-d822-44b5-a75c-e359c65c78ea","Type":"ContainerDied","Data":"dea2276ce0890634faad2f01c08bc3abb5132cd1425bd3cf9af51cd32d6af9a9"} Jan 24 08:02:51 crc kubenswrapper[4705]: I0124 08:02:51.548486 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6bbd698cdd-bj25j"] Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.004453 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-67559d7f8-s8rzr"] Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.053094 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.098808 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.201762 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-config\") pod \"266a132c-d822-44b5-a75c-e359c65c78ea\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.202004 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-ovndb-tls-certs\") pod \"266a132c-d822-44b5-a75c-e359c65c78ea\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.202164 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-httpd-config\") pod \"266a132c-d822-44b5-a75c-e359c65c78ea\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.202359 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-combined-ca-bundle\") pod \"266a132c-d822-44b5-a75c-e359c65c78ea\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.202423 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj9nx\" (UniqueName: \"kubernetes.io/projected/266a132c-d822-44b5-a75c-e359c65c78ea-kube-api-access-rj9nx\") pod \"266a132c-d822-44b5-a75c-e359c65c78ea\" (UID: \"266a132c-d822-44b5-a75c-e359c65c78ea\") " Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.219177 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/266a132c-d822-44b5-a75c-e359c65c78ea-kube-api-access-rj9nx" (OuterVolumeSpecName: "kube-api-access-rj9nx") pod "266a132c-d822-44b5-a75c-e359c65c78ea" (UID: "266a132c-d822-44b5-a75c-e359c65c78ea"). InnerVolumeSpecName "kube-api-access-rj9nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.226998 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "266a132c-d822-44b5-a75c-e359c65c78ea" (UID: "266a132c-d822-44b5-a75c-e359c65c78ea"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.312793 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj9nx\" (UniqueName: \"kubernetes.io/projected/266a132c-d822-44b5-a75c-e359c65c78ea-kube-api-access-rj9nx\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.312868 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.354317 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e29114b1-8009-4cf4-8eef-20c19e0687d2","Type":"ContainerStarted","Data":"49429341439b6d330b9a7ca8de7c064617feaabfb540f2ff6b3bedf7d4877861"} Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.367151 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6bbd698cdd-bj25j" event={"ID":"bb805121-ae65-457e-877f-2db0ae5e61dc","Type":"ContainerStarted","Data":"827270ba18596ecd63b5e8f968707a42cee87bc38db5970448166c99a373e09e"} Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.369313 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7bcb574654-nws5h" event={"ID":"c8875877-e577-4bf1-a74b-60931b57036c","Type":"ContainerStarted","Data":"b40ee7791c5ae8e74141ea81142f6330c572c65753f8d763ce98fefdc19e930a"} Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.369552 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7bcb574654-nws5h" podUID="c8875877-e577-4bf1-a74b-60931b57036c" containerName="heat-api" containerID="cri-o://b40ee7791c5ae8e74141ea81142f6330c572c65753f8d763ce98fefdc19e930a" gracePeriod=60 Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.370031 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.381934 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-67559d7f8-s8rzr" event={"ID":"13afed15-05ec-4ae6-a29f-5c9226770a19","Type":"ContainerStarted","Data":"fb50005161fb0143e1904e8e185e9044062d29609838e0630dea4f18d01a368c"} Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.394088 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7bcb574654-nws5h" podStartSLOduration=7.411120377 podStartE2EDuration="13.394062565s" podCreationTimestamp="2026-01-24 08:02:39 +0000 UTC" firstStartedPulling="2026-01-24 08:02:44.725117395 +0000 UTC m=+1303.444990683" lastFinishedPulling="2026-01-24 08:02:50.708059583 +0000 UTC m=+1309.427932871" observedRunningTime="2026-01-24 08:02:52.388400697 +0000 UTC m=+1311.108273985" watchObservedRunningTime="2026-01-24 08:02:52.394062565 +0000 UTC m=+1311.113935853" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.398103 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b96594b8-9jfp2" event={"ID":"266a132c-d822-44b5-a75c-e359c65c78ea","Type":"ContainerDied","Data":"69f4ad74289397834f86675e68a1f3904003531ab17cd975b1755b37d9179b38"} Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.398196 4705 scope.go:117] "RemoveContainer" containerID="46fb9222c6415345ec7a8700123b928768dcdad6f528b20811dbe265f3c214b2" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.398435 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77b96594b8-9jfp2" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.414134 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f6475444-p69rq" event={"ID":"fffaad63-99ef-4990-957b-3433d20528c2","Type":"ContainerStarted","Data":"9286667a562820f81b5366e4ee6cf488f629fae3471465d3c161cfc5d7835f85"} Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.414738 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.414941 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7f6475444-p69rq" podUID="fffaad63-99ef-4990-957b-3433d20528c2" containerName="heat-cfnapi" containerID="cri-o://9286667a562820f81b5366e4ee6cf488f629fae3471465d3c161cfc5d7835f85" gracePeriod=60 Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.441433 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7f6475444-p69rq" podStartSLOduration=7.193376757 podStartE2EDuration="13.4414146s" podCreationTimestamp="2026-01-24 08:02:39 +0000 UTC" firstStartedPulling="2026-01-24 08:02:44.7385173 +0000 UTC m=+1303.458390598" lastFinishedPulling="2026-01-24 08:02:50.986555153 +0000 UTC m=+1309.706428441" observedRunningTime="2026-01-24 08:02:52.439408394 +0000 UTC m=+1311.159281682" watchObservedRunningTime="2026-01-24 08:02:52.4414146 +0000 UTC m=+1311.161287888" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.458225 4705 scope.go:117] "RemoveContainer" containerID="dea2276ce0890634faad2f01c08bc3abb5132cd1425bd3cf9af51cd32d6af9a9" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.460854 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"266a132c-d822-44b5-a75c-e359c65c78ea" (UID: "266a132c-d822-44b5-a75c-e359c65c78ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.520085 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.550401 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-config" (OuterVolumeSpecName: "config") pod "266a132c-d822-44b5-a75c-e359c65c78ea" (UID: "266a132c-d822-44b5-a75c-e359c65c78ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.627674 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.644741 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "266a132c-d822-44b5-a75c-e359c65c78ea" (UID: "266a132c-d822-44b5-a75c-e359c65c78ea"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.733183 4705 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/266a132c-d822-44b5-a75c-e359c65c78ea-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.769345 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77b96594b8-9jfp2"] Jan 24 08:02:52 crc kubenswrapper[4705]: I0124 08:02:52.780031 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-77b96594b8-9jfp2"] Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 08:02:53.488625 4705 generic.go:334] "Generic (PLEG): container finished" podID="8fc5b3f3-290d-493a-921c-b114b7c2fd98" containerID="de8605ca3fbbf688859ce9efb9fb95a09b331dc1f2b05830141b2e8f285dfff1" exitCode=1 Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 08:02:53.489096 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77b89ffb5c-ntbn2" event={"ID":"8fc5b3f3-290d-493a-921c-b114b7c2fd98","Type":"ContainerDied","Data":"de8605ca3fbbf688859ce9efb9fb95a09b331dc1f2b05830141b2e8f285dfff1"} Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 08:02:53.541375 4705 generic.go:334] "Generic (PLEG): container finished" podID="fffaad63-99ef-4990-957b-3433d20528c2" containerID="9286667a562820f81b5366e4ee6cf488f629fae3471465d3c161cfc5d7835f85" exitCode=0 Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 08:02:53.541515 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f6475444-p69rq" event={"ID":"fffaad63-99ef-4990-957b-3433d20528c2","Type":"ContainerDied","Data":"9286667a562820f81b5366e4ee6cf488f629fae3471465d3c161cfc5d7835f85"} Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 08:02:53.575488 4705 scope.go:117] "RemoveContainer" containerID="de8605ca3fbbf688859ce9efb9fb95a09b331dc1f2b05830141b2e8f285dfff1" Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 
08:02:53.582222 4705 generic.go:334] "Generic (PLEG): container finished" podID="699f900a-a29f-4b4b-b38a-102f8d440596" containerID="6a81fa69cb222ce6fdacc22fd7202ade270ceeb17be910618fe695aba0ded6f2" exitCode=1 Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 08:02:53.651030 4705 generic.go:334] "Generic (PLEG): container finished" podID="c8875877-e577-4bf1-a74b-60931b57036c" containerID="b40ee7791c5ae8e74141ea81142f6330c572c65753f8d763ce98fefdc19e930a" exitCode=0 Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 08:02:53.747725 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6bbd698cdd-bj25j" podStartSLOduration=4.74770338 podStartE2EDuration="4.74770338s" podCreationTimestamp="2026-01-24 08:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:02:53.698790481 +0000 UTC m=+1312.418663779" watchObservedRunningTime="2026-01-24 08:02:53.74770338 +0000 UTC m=+1312.467576668" Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 08:02:53.751900 4705 scope.go:117] "RemoveContainer" containerID="6a81fa69cb222ce6fdacc22fd7202ade270ceeb17be910618fe695aba0ded6f2" Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 08:02:53.752961 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-67559d7f8-s8rzr" podStartSLOduration=4.752944106 podStartE2EDuration="4.752944106s" podCreationTimestamp="2026-01-24 08:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:02:53.724644424 +0000 UTC m=+1312.444517712" watchObservedRunningTime="2026-01-24 08:02:53.752944106 +0000 UTC m=+1312.472817394" Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 08:02:53.987252 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="266a132c-d822-44b5-a75c-e359c65c78ea" 
path="/var/lib/kubelet/pods/266a132c-d822-44b5-a75c-e359c65c78ea/volumes" Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 08:02:53.988219 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" event={"ID":"699f900a-a29f-4b4b-b38a-102f8d440596","Type":"ContainerDied","Data":"6a81fa69cb222ce6fdacc22fd7202ade270ceeb17be910618fe695aba0ded6f2"} Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 08:02:53.988251 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6bbd698cdd-bj25j" event={"ID":"bb805121-ae65-457e-877f-2db0ae5e61dc","Type":"ContainerStarted","Data":"d9a846567e0453bb62e0615b06107c36ea65f3a258c4dd648d7c82ed3ebddef7"} Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 08:02:53.988278 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 08:02:53.988321 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7bcb574654-nws5h" event={"ID":"c8875877-e577-4bf1-a74b-60931b57036c","Type":"ContainerDied","Data":"b40ee7791c5ae8e74141ea81142f6330c572c65753f8d763ce98fefdc19e930a"} Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 08:02:53.988341 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:02:53 crc kubenswrapper[4705]: I0124 08:02:53.988355 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-67559d7f8-s8rzr" event={"ID":"13afed15-05ec-4ae6-a29f-5c9226770a19","Type":"ContainerStarted","Data":"83f030eb249ef205a58f76f13dc4694a9a4009269712b5782bcebf7090fdf318"} Jan 24 08:02:54 crc kubenswrapper[4705]: E0124 08:02:54.055584 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8875877_e577_4bf1_a74b_60931b57036c.slice/crio-conmon-b40ee7791c5ae8e74141ea81142f6330c572c65753f8d763ce98fefdc19e930a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfffaad63_99ef_4990_957b_3433d20528c2.slice/crio-9286667a562820f81b5366e4ee6cf488f629fae3471465d3c161cfc5d7835f85.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fc5b3f3_290d_493a_921c_b114b7c2fd98.slice/crio-conmon-de8605ca3fbbf688859ce9efb9fb95a09b331dc1f2b05830141b2e8f285dfff1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8875877_e577_4bf1_a74b_60931b57036c.slice/crio-b40ee7791c5ae8e74141ea81142f6330c572c65753f8d763ce98fefdc19e930a.scope\": RecentStats: unable to find data in memory cache]" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.085785 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.122308 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.222868 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn628\" (UniqueName: \"kubernetes.io/projected/fffaad63-99ef-4990-957b-3433d20528c2-kube-api-access-bn628\") pod \"fffaad63-99ef-4990-957b-3433d20528c2\" (UID: \"fffaad63-99ef-4990-957b-3433d20528c2\") " Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.223035 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-config-data\") pod \"fffaad63-99ef-4990-957b-3433d20528c2\" (UID: \"fffaad63-99ef-4990-957b-3433d20528c2\") " Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.223094 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-combined-ca-bundle\") pod \"fffaad63-99ef-4990-957b-3433d20528c2\" (UID: \"fffaad63-99ef-4990-957b-3433d20528c2\") " Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.223175 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-combined-ca-bundle\") pod \"c8875877-e577-4bf1-a74b-60931b57036c\" (UID: \"c8875877-e577-4bf1-a74b-60931b57036c\") " Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.223219 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-config-data-custom\") pod \"fffaad63-99ef-4990-957b-3433d20528c2\" (UID: \"fffaad63-99ef-4990-957b-3433d20528c2\") " Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.223348 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-zd749\" (UniqueName: \"kubernetes.io/projected/c8875877-e577-4bf1-a74b-60931b57036c-kube-api-access-zd749\") pod \"c8875877-e577-4bf1-a74b-60931b57036c\" (UID: \"c8875877-e577-4bf1-a74b-60931b57036c\") " Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.223390 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-config-data\") pod \"c8875877-e577-4bf1-a74b-60931b57036c\" (UID: \"c8875877-e577-4bf1-a74b-60931b57036c\") " Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.223435 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-config-data-custom\") pod \"c8875877-e577-4bf1-a74b-60931b57036c\" (UID: \"c8875877-e577-4bf1-a74b-60931b57036c\") " Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.237094 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c8875877-e577-4bf1-a74b-60931b57036c" (UID: "c8875877-e577-4bf1-a74b-60931b57036c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.237546 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fffaad63-99ef-4990-957b-3433d20528c2-kube-api-access-bn628" (OuterVolumeSpecName: "kube-api-access-bn628") pod "fffaad63-99ef-4990-957b-3433d20528c2" (UID: "fffaad63-99ef-4990-957b-3433d20528c2"). InnerVolumeSpecName "kube-api-access-bn628". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.244080 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fffaad63-99ef-4990-957b-3433d20528c2" (UID: "fffaad63-99ef-4990-957b-3433d20528c2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.262258 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8875877-e577-4bf1-a74b-60931b57036c-kube-api-access-zd749" (OuterVolumeSpecName: "kube-api-access-zd749") pod "c8875877-e577-4bf1-a74b-60931b57036c" (UID: "c8875877-e577-4bf1-a74b-60931b57036c"). InnerVolumeSpecName "kube-api-access-zd749". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.270594 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8875877-e577-4bf1-a74b-60931b57036c" (UID: "c8875877-e577-4bf1-a74b-60931b57036c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.312138 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-config-data" (OuterVolumeSpecName: "config-data") pod "fffaad63-99ef-4990-957b-3433d20528c2" (UID: "fffaad63-99ef-4990-957b-3433d20528c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.325807 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-config-data" (OuterVolumeSpecName: "config-data") pod "c8875877-e577-4bf1-a74b-60931b57036c" (UID: "c8875877-e577-4bf1-a74b-60931b57036c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.325871 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.325913 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn628\" (UniqueName: \"kubernetes.io/projected/fffaad63-99ef-4990-957b-3433d20528c2-kube-api-access-bn628\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.325959 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.325972 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.325984 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.325995 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd749\" 
(UniqueName: \"kubernetes.io/projected/c8875877-e577-4bf1-a74b-60931b57036c-kube-api-access-zd749\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.379584 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fffaad63-99ef-4990-957b-3433d20528c2" (UID: "fffaad63-99ef-4990-957b-3433d20528c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.428103 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fffaad63-99ef-4990-957b-3433d20528c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.428156 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8875877-e577-4bf1-a74b-60931b57036c-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.577987 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.665998 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gsn9w"] Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.666287 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" podUID="0ebc3649-e2d4-4769-9a0e-4ba52f06f077" containerName="dnsmasq-dns" containerID="cri-o://d031c09ef630a37660a5db6fffd3941fb71ae9d065d2a4117a67a9dd2c640567" gracePeriod=10 Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.742871 4705 generic.go:334] "Generic (PLEG): container finished" podID="8fc5b3f3-290d-493a-921c-b114b7c2fd98" 
containerID="f37bc30758c37c6ec67b6430424ec69f79a795fed0011d5abdad72050191cccb" exitCode=1 Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.743308 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77b89ffb5c-ntbn2" event={"ID":"8fc5b3f3-290d-493a-921c-b114b7c2fd98","Type":"ContainerDied","Data":"f37bc30758c37c6ec67b6430424ec69f79a795fed0011d5abdad72050191cccb"} Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.743372 4705 scope.go:117] "RemoveContainer" containerID="de8605ca3fbbf688859ce9efb9fb95a09b331dc1f2b05830141b2e8f285dfff1" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.743678 4705 scope.go:117] "RemoveContainer" containerID="f37bc30758c37c6ec67b6430424ec69f79a795fed0011d5abdad72050191cccb" Jan 24 08:02:54 crc kubenswrapper[4705]: E0124 08:02:54.743966 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-77b89ffb5c-ntbn2_openstack(8fc5b3f3-290d-493a-921c-b114b7c2fd98)\"" pod="openstack/heat-api-77b89ffb5c-ntbn2" podUID="8fc5b3f3-290d-493a-921c-b114b7c2fd98" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.759909 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f6475444-p69rq" event={"ID":"fffaad63-99ef-4990-957b-3433d20528c2","Type":"ContainerDied","Data":"dc2aab55633cc725e3bc65b8678618f585a872aae38411d66055631fd7c0cc85"} Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.760026 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7f6475444-p69rq" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.773923 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e29114b1-8009-4cf4-8eef-20c19e0687d2","Type":"ContainerStarted","Data":"117157bc71177979938c47b23953388339e81f185bf62f235ed5caa6f63abd4e"} Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.779133 4705 generic.go:334] "Generic (PLEG): container finished" podID="699f900a-a29f-4b4b-b38a-102f8d440596" containerID="dceb25f0b85589f81a6b2a64b8eec359431edf8775311097059fe4867ac163b8" exitCode=1 Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.779241 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" event={"ID":"699f900a-a29f-4b4b-b38a-102f8d440596","Type":"ContainerDied","Data":"dceb25f0b85589f81a6b2a64b8eec359431edf8775311097059fe4867ac163b8"} Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.779929 4705 scope.go:117] "RemoveContainer" containerID="dceb25f0b85589f81a6b2a64b8eec359431edf8775311097059fe4867ac163b8" Jan 24 08:02:54 crc kubenswrapper[4705]: E0124 08:02:54.780181 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7b8dc4cdfc-hwv79_openstack(699f900a-a29f-4b4b-b38a-102f8d440596)\"" pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" podUID="699f900a-a29f-4b4b-b38a-102f8d440596" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.804035 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7bcb574654-nws5h" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.804280 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7bcb574654-nws5h" event={"ID":"c8875877-e577-4bf1-a74b-60931b57036c","Type":"ContainerDied","Data":"4c2b255eb21c11263993b300ea7a06843bd242b574de60b8f94aa0aa8168ba43"} Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.887303 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7f6475444-p69rq"] Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.916429 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7f6475444-p69rq"] Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.922087 4705 scope.go:117] "RemoveContainer" containerID="9286667a562820f81b5366e4ee6cf488f629fae3471465d3c161cfc5d7835f85" Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.934485 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7bcb574654-nws5h"] Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.955296 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7bcb574654-nws5h"] Jan 24 08:02:54 crc kubenswrapper[4705]: I0124 08:02:54.960556 4705 scope.go:117] "RemoveContainer" containerID="6a81fa69cb222ce6fdacc22fd7202ade270ceeb17be910618fe695aba0ded6f2" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.071125 4705 scope.go:117] "RemoveContainer" containerID="b40ee7791c5ae8e74141ea81142f6330c572c65753f8d763ce98fefdc19e930a" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.344280 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.451586 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-dns-svc\") pod \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.451644 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-dns-swift-storage-0\") pod \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.451708 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-ovsdbserver-nb\") pod \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.451775 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-config\") pod \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.451815 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-ovsdbserver-sb\") pod \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.451891 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29hmr\" 
(UniqueName: \"kubernetes.io/projected/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-kube-api-access-29hmr\") pod \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\" (UID: \"0ebc3649-e2d4-4769-9a0e-4ba52f06f077\") " Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.465140 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-kube-api-access-29hmr" (OuterVolumeSpecName: "kube-api-access-29hmr") pod "0ebc3649-e2d4-4769-9a0e-4ba52f06f077" (UID: "0ebc3649-e2d4-4769-9a0e-4ba52f06f077"). InnerVolumeSpecName "kube-api-access-29hmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.515718 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ebc3649-e2d4-4769-9a0e-4ba52f06f077" (UID: "0ebc3649-e2d4-4769-9a0e-4ba52f06f077"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.527346 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ebc3649-e2d4-4769-9a0e-4ba52f06f077" (UID: "0ebc3649-e2d4-4769-9a0e-4ba52f06f077"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.536129 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0ebc3649-e2d4-4769-9a0e-4ba52f06f077" (UID: "0ebc3649-e2d4-4769-9a0e-4ba52f06f077"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.536188 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-config" (OuterVolumeSpecName: "config") pod "0ebc3649-e2d4-4769-9a0e-4ba52f06f077" (UID: "0ebc3649-e2d4-4769-9a0e-4ba52f06f077"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.548022 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ebc3649-e2d4-4769-9a0e-4ba52f06f077" (UID: "0ebc3649-e2d4-4769-9a0e-4ba52f06f077"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.554047 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.554088 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.554098 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.554108 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29hmr\" (UniqueName: \"kubernetes.io/projected/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-kube-api-access-29hmr\") on node \"crc\" DevicePath \"\"" Jan 24 
08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.554119 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.554128 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ebc3649-e2d4-4769-9a0e-4ba52f06f077-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.589573 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8875877-e577-4bf1-a74b-60931b57036c" path="/var/lib/kubelet/pods/c8875877-e577-4bf1-a74b-60931b57036c/volumes" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.590177 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fffaad63-99ef-4990-957b-3433d20528c2" path="/var/lib/kubelet/pods/fffaad63-99ef-4990-957b-3433d20528c2/volumes" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.814462 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e29114b1-8009-4cf4-8eef-20c19e0687d2","Type":"ContainerStarted","Data":"bbfab8c694233fdc9d54b76b938cefb811ba313a1b1eb4bfcf3e45e9a6eed435"} Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.816868 4705 scope.go:117] "RemoveContainer" containerID="dceb25f0b85589f81a6b2a64b8eec359431edf8775311097059fe4867ac163b8" Jan 24 08:02:55 crc kubenswrapper[4705]: E0124 08:02:55.817150 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7b8dc4cdfc-hwv79_openstack(699f900a-a29f-4b4b-b38a-102f8d440596)\"" pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" podUID="699f900a-a29f-4b4b-b38a-102f8d440596" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.820127 4705 
generic.go:334] "Generic (PLEG): container finished" podID="0ebc3649-e2d4-4769-9a0e-4ba52f06f077" containerID="d031c09ef630a37660a5db6fffd3941fb71ae9d065d2a4117a67a9dd2c640567" exitCode=0 Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.820229 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" event={"ID":"0ebc3649-e2d4-4769-9a0e-4ba52f06f077","Type":"ContainerDied","Data":"d031c09ef630a37660a5db6fffd3941fb71ae9d065d2a4117a67a9dd2c640567"} Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.820312 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" event={"ID":"0ebc3649-e2d4-4769-9a0e-4ba52f06f077","Type":"ContainerDied","Data":"66c127678c34410bb8fe70b868531bd419e02bb183158a118e3a52210d0c6d46"} Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.820339 4705 scope.go:117] "RemoveContainer" containerID="d031c09ef630a37660a5db6fffd3941fb71ae9d065d2a4117a67a9dd2c640567" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.820436 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-gsn9w" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.824161 4705 scope.go:117] "RemoveContainer" containerID="f37bc30758c37c6ec67b6430424ec69f79a795fed0011d5abdad72050191cccb" Jan 24 08:02:55 crc kubenswrapper[4705]: E0124 08:02:55.824378 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-77b89ffb5c-ntbn2_openstack(8fc5b3f3-290d-493a-921c-b114b7c2fd98)\"" pod="openstack/heat-api-77b89ffb5c-ntbn2" podUID="8fc5b3f3-290d-493a-921c-b114b7c2fd98" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.848888 4705 scope.go:117] "RemoveContainer" containerID="171024bfa06b34f5ac6b90bb4d081e5fed7d4ab0ddb109a815cd5ebab673bacf" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.876612 4705 scope.go:117] "RemoveContainer" containerID="d031c09ef630a37660a5db6fffd3941fb71ae9d065d2a4117a67a9dd2c640567" Jan 24 08:02:55 crc kubenswrapper[4705]: E0124 08:02:55.880249 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d031c09ef630a37660a5db6fffd3941fb71ae9d065d2a4117a67a9dd2c640567\": container with ID starting with d031c09ef630a37660a5db6fffd3941fb71ae9d065d2a4117a67a9dd2c640567 not found: ID does not exist" containerID="d031c09ef630a37660a5db6fffd3941fb71ae9d065d2a4117a67a9dd2c640567" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.880300 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d031c09ef630a37660a5db6fffd3941fb71ae9d065d2a4117a67a9dd2c640567"} err="failed to get container status \"d031c09ef630a37660a5db6fffd3941fb71ae9d065d2a4117a67a9dd2c640567\": rpc error: code = NotFound desc = could not find container \"d031c09ef630a37660a5db6fffd3941fb71ae9d065d2a4117a67a9dd2c640567\": container with ID starting with 
d031c09ef630a37660a5db6fffd3941fb71ae9d065d2a4117a67a9dd2c640567 not found: ID does not exist" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.880330 4705 scope.go:117] "RemoveContainer" containerID="171024bfa06b34f5ac6b90bb4d081e5fed7d4ab0ddb109a815cd5ebab673bacf" Jan 24 08:02:55 crc kubenswrapper[4705]: E0124 08:02:55.884186 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"171024bfa06b34f5ac6b90bb4d081e5fed7d4ab0ddb109a815cd5ebab673bacf\": container with ID starting with 171024bfa06b34f5ac6b90bb4d081e5fed7d4ab0ddb109a815cd5ebab673bacf not found: ID does not exist" containerID="171024bfa06b34f5ac6b90bb4d081e5fed7d4ab0ddb109a815cd5ebab673bacf" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.884233 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"171024bfa06b34f5ac6b90bb4d081e5fed7d4ab0ddb109a815cd5ebab673bacf"} err="failed to get container status \"171024bfa06b34f5ac6b90bb4d081e5fed7d4ab0ddb109a815cd5ebab673bacf\": rpc error: code = NotFound desc = could not find container \"171024bfa06b34f5ac6b90bb4d081e5fed7d4ab0ddb109a815cd5ebab673bacf\": container with ID starting with 171024bfa06b34f5ac6b90bb4d081e5fed7d4ab0ddb109a815cd5ebab673bacf not found: ID does not exist" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.886796 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gsn9w"] Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.903600 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gsn9w"] Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.936896 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.936970 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.947234 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:02:55 crc kubenswrapper[4705]: I0124 08:02:55.947308 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:02:56 crc kubenswrapper[4705]: I0124 08:02:56.839466 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e29114b1-8009-4cf4-8eef-20c19e0687d2","Type":"ContainerStarted","Data":"236ebae6c3383d9557e8018837e25830d73aaeaef7733e215d3f35920c19e2b5"} Jan 24 08:02:56 crc kubenswrapper[4705]: I0124 08:02:56.842369 4705 scope.go:117] "RemoveContainer" containerID="dceb25f0b85589f81a6b2a64b8eec359431edf8775311097059fe4867ac163b8" Jan 24 08:02:56 crc kubenswrapper[4705]: E0124 08:02:56.842646 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7b8dc4cdfc-hwv79_openstack(699f900a-a29f-4b4b-b38a-102f8d440596)\"" pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" podUID="699f900a-a29f-4b4b-b38a-102f8d440596" Jan 24 08:02:56 crc kubenswrapper[4705]: I0124 08:02:56.842911 4705 scope.go:117] "RemoveContainer" containerID="f37bc30758c37c6ec67b6430424ec69f79a795fed0011d5abdad72050191cccb" Jan 24 08:02:56 crc kubenswrapper[4705]: E0124 08:02:56.843175 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-77b89ffb5c-ntbn2_openstack(8fc5b3f3-290d-493a-921c-b114b7c2fd98)\"" pod="openstack/heat-api-77b89ffb5c-ntbn2" podUID="8fc5b3f3-290d-493a-921c-b114b7c2fd98" Jan 24 08:02:57 crc kubenswrapper[4705]: I0124 08:02:57.589185 4705 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="0ebc3649-e2d4-4769-9a0e-4ba52f06f077" path="/var/lib/kubelet/pods/0ebc3649-e2d4-4769-9a0e-4ba52f06f077/volumes" Jan 24 08:02:57 crc kubenswrapper[4705]: I0124 08:02:57.854798 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e29114b1-8009-4cf4-8eef-20c19e0687d2","Type":"ContainerStarted","Data":"660db2ffdc230fb53e155a5e9ca2ecfec9529872b4d33ff525ffe1c43cd51e9e"} Jan 24 08:02:57 crc kubenswrapper[4705]: I0124 08:02:57.855415 4705 scope.go:117] "RemoveContainer" containerID="dceb25f0b85589f81a6b2a64b8eec359431edf8775311097059fe4867ac163b8" Jan 24 08:02:57 crc kubenswrapper[4705]: E0124 08:02:57.855728 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7b8dc4cdfc-hwv79_openstack(699f900a-a29f-4b4b-b38a-102f8d440596)\"" pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" podUID="699f900a-a29f-4b4b-b38a-102f8d440596" Jan 24 08:02:57 crc kubenswrapper[4705]: I0124 08:02:57.856254 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 08:02:57 crc kubenswrapper[4705]: I0124 08:02:57.856651 4705 scope.go:117] "RemoveContainer" containerID="f37bc30758c37c6ec67b6430424ec69f79a795fed0011d5abdad72050191cccb" Jan 24 08:02:57 crc kubenswrapper[4705]: E0124 08:02:57.856910 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-77b89ffb5c-ntbn2_openstack(8fc5b3f3-290d-493a-921c-b114b7c2fd98)\"" pod="openstack/heat-api-77b89ffb5c-ntbn2" podUID="8fc5b3f3-290d-493a-921c-b114b7c2fd98" Jan 24 08:02:57 crc kubenswrapper[4705]: I0124 08:02:57.905141 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=4.739111978 podStartE2EDuration="9.905107193s" podCreationTimestamp="2026-01-24 08:02:48 +0000 UTC" firstStartedPulling="2026-01-24 08:02:52.091532103 +0000 UTC m=+1310.811405391" lastFinishedPulling="2026-01-24 08:02:57.257527318 +0000 UTC m=+1315.977400606" observedRunningTime="2026-01-24 08:02:57.878331494 +0000 UTC m=+1316.598204812" watchObservedRunningTime="2026-01-24 08:02:57.905107193 +0000 UTC m=+1316.624980481" Jan 24 08:02:58 crc kubenswrapper[4705]: I0124 08:02:58.394176 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 08:02:58 crc kubenswrapper[4705]: I0124 08:02:58.394446 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c44b4bd9-f2df-4ca0-8214-904cd04b10e8" containerName="glance-log" containerID="cri-o://b674941e119859e5e54ce45d0088310944f038d162d1e1382fc6e86307f7087b" gracePeriod=30 Jan 24 08:02:58 crc kubenswrapper[4705]: I0124 08:02:58.394534 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c44b4bd9-f2df-4ca0-8214-904cd04b10e8" containerName="glance-httpd" containerID="cri-o://a0a4d36097fa0829035badbb6b222043f52ffb9ea2ba7bc665f21f77dfe06563" gracePeriod=30 Jan 24 08:02:58 crc kubenswrapper[4705]: I0124 08:02:58.874230 4705 generic.go:334] "Generic (PLEG): container finished" podID="c44b4bd9-f2df-4ca0-8214-904cd04b10e8" containerID="b674941e119859e5e54ce45d0088310944f038d162d1e1382fc6e86307f7087b" exitCode=143 Jan 24 08:02:58 crc kubenswrapper[4705]: I0124 08:02:58.874337 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c44b4bd9-f2df-4ca0-8214-904cd04b10e8","Type":"ContainerDied","Data":"b674941e119859e5e54ce45d0088310944f038d162d1e1382fc6e86307f7087b"} Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.009918 4705 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.332875 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.850515 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-pwc24"] Jan 24 08:02:59 crc kubenswrapper[4705]: E0124 08:02:59.851326 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ebc3649-e2d4-4769-9a0e-4ba52f06f077" containerName="dnsmasq-dns" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.851363 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebc3649-e2d4-4769-9a0e-4ba52f06f077" containerName="dnsmasq-dns" Jan 24 08:02:59 crc kubenswrapper[4705]: E0124 08:02:59.851377 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8875877-e577-4bf1-a74b-60931b57036c" containerName="heat-api" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.851387 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8875877-e577-4bf1-a74b-60931b57036c" containerName="heat-api" Jan 24 08:02:59 crc kubenswrapper[4705]: E0124 08:02:59.851417 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ebc3649-e2d4-4769-9a0e-4ba52f06f077" containerName="init" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.851427 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebc3649-e2d4-4769-9a0e-4ba52f06f077" containerName="init" Jan 24 08:02:59 crc kubenswrapper[4705]: E0124 08:02:59.851454 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266a132c-d822-44b5-a75c-e359c65c78ea" containerName="neutron-api" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.851464 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="266a132c-d822-44b5-a75c-e359c65c78ea" containerName="neutron-api" Jan 24 08:02:59 crc kubenswrapper[4705]: E0124 08:02:59.851476 4705 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fffaad63-99ef-4990-957b-3433d20528c2" containerName="heat-cfnapi" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.851486 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fffaad63-99ef-4990-957b-3433d20528c2" containerName="heat-cfnapi" Jan 24 08:02:59 crc kubenswrapper[4705]: E0124 08:02:59.851514 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266a132c-d822-44b5-a75c-e359c65c78ea" containerName="neutron-httpd" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.851522 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="266a132c-d822-44b5-a75c-e359c65c78ea" containerName="neutron-httpd" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.851756 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ebc3649-e2d4-4769-9a0e-4ba52f06f077" containerName="dnsmasq-dns" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.851779 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8875877-e577-4bf1-a74b-60931b57036c" containerName="heat-api" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.851794 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="266a132c-d822-44b5-a75c-e359c65c78ea" containerName="neutron-httpd" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.851805 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="266a132c-d822-44b5-a75c-e359c65c78ea" containerName="neutron-api" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.851848 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="fffaad63-99ef-4990-957b-3433d20528c2" containerName="heat-cfnapi" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.860174 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-pwc24" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.875722 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pwc24"] Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.959790 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c02cc76-2478-43fb-a558-70fa32999210-operator-scripts\") pod \"nova-api-db-create-pwc24\" (UID: \"9c02cc76-2478-43fb-a558-70fa32999210\") " pod="openstack/nova-api-db-create-pwc24" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.959871 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfjqj\" (UniqueName: \"kubernetes.io/projected/9c02cc76-2478-43fb-a558-70fa32999210-kube-api-access-jfjqj\") pod \"nova-api-db-create-pwc24\" (UID: \"9c02cc76-2478-43fb-a558-70fa32999210\") " pod="openstack/nova-api-db-create-pwc24" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.972608 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-tnl2q"] Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.973750 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tnl2q" Jan 24 08:02:59 crc kubenswrapper[4705]: I0124 08:02:59.996224 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tnl2q"] Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.061863 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb2778d-90da-49fd-ba99-d1d32efd55c4-operator-scripts\") pod \"nova-cell0-db-create-tnl2q\" (UID: \"cbb2778d-90da-49fd-ba99-d1d32efd55c4\") " pod="openstack/nova-cell0-db-create-tnl2q" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.061941 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2znhr\" (UniqueName: \"kubernetes.io/projected/cbb2778d-90da-49fd-ba99-d1d32efd55c4-kube-api-access-2znhr\") pod \"nova-cell0-db-create-tnl2q\" (UID: \"cbb2778d-90da-49fd-ba99-d1d32efd55c4\") " pod="openstack/nova-cell0-db-create-tnl2q" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.062013 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c02cc76-2478-43fb-a558-70fa32999210-operator-scripts\") pod \"nova-api-db-create-pwc24\" (UID: \"9c02cc76-2478-43fb-a558-70fa32999210\") " pod="openstack/nova-api-db-create-pwc24" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.062069 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfjqj\" (UniqueName: \"kubernetes.io/projected/9c02cc76-2478-43fb-a558-70fa32999210-kube-api-access-jfjqj\") pod \"nova-api-db-create-pwc24\" (UID: \"9c02cc76-2478-43fb-a558-70fa32999210\") " pod="openstack/nova-api-db-create-pwc24" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.063470 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/9c02cc76-2478-43fb-a558-70fa32999210-operator-scripts\") pod \"nova-api-db-create-pwc24\" (UID: \"9c02cc76-2478-43fb-a558-70fa32999210\") " pod="openstack/nova-api-db-create-pwc24" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.094412 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfjqj\" (UniqueName: \"kubernetes.io/projected/9c02cc76-2478-43fb-a558-70fa32999210-kube-api-access-jfjqj\") pod \"nova-api-db-create-pwc24\" (UID: \"9c02cc76-2478-43fb-a558-70fa32999210\") " pod="openstack/nova-api-db-create-pwc24" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.166328 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb2778d-90da-49fd-ba99-d1d32efd55c4-operator-scripts\") pod \"nova-cell0-db-create-tnl2q\" (UID: \"cbb2778d-90da-49fd-ba99-d1d32efd55c4\") " pod="openstack/nova-cell0-db-create-tnl2q" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.166415 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2znhr\" (UniqueName: \"kubernetes.io/projected/cbb2778d-90da-49fd-ba99-d1d32efd55c4-kube-api-access-2znhr\") pod \"nova-cell0-db-create-tnl2q\" (UID: \"cbb2778d-90da-49fd-ba99-d1d32efd55c4\") " pod="openstack/nova-cell0-db-create-tnl2q" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.167465 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb2778d-90da-49fd-ba99-d1d32efd55c4-operator-scripts\") pod \"nova-cell0-db-create-tnl2q\" (UID: \"cbb2778d-90da-49fd-ba99-d1d32efd55c4\") " pod="openstack/nova-cell0-db-create-tnl2q" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.199838 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-pwc24" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.209670 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-bh2tb"] Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.211204 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bh2tb" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.225801 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2znhr\" (UniqueName: \"kubernetes.io/projected/cbb2778d-90da-49fd-ba99-d1d32efd55c4-kube-api-access-2znhr\") pod \"nova-cell0-db-create-tnl2q\" (UID: \"cbb2778d-90da-49fd-ba99-d1d32efd55c4\") " pod="openstack/nova-cell0-db-create-tnl2q" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.238927 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bh2tb"] Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.264884 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-5829-account-create-update-wq6kb"] Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.266269 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5829-account-create-update-wq6kb" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.277411 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.291901 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5829-account-create-update-wq6kb"] Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.337801 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tnl2q" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.372378 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4srxw\" (UniqueName: \"kubernetes.io/projected/668dd17c-3a2a-48b7-967a-a06a2b9a1192-kube-api-access-4srxw\") pod \"nova-cell1-db-create-bh2tb\" (UID: \"668dd17c-3a2a-48b7-967a-a06a2b9a1192\") " pod="openstack/nova-cell1-db-create-bh2tb" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.378194 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b41ea1a4-fd22-4665-83e1-6e06bae9aeb5-operator-scripts\") pod \"nova-api-5829-account-create-update-wq6kb\" (UID: \"b41ea1a4-fd22-4665-83e1-6e06bae9aeb5\") " pod="openstack/nova-api-5829-account-create-update-wq6kb" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.378251 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668dd17c-3a2a-48b7-967a-a06a2b9a1192-operator-scripts\") pod \"nova-cell1-db-create-bh2tb\" (UID: \"668dd17c-3a2a-48b7-967a-a06a2b9a1192\") " pod="openstack/nova-cell1-db-create-bh2tb" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.378495 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmkzm\" (UniqueName: \"kubernetes.io/projected/b41ea1a4-fd22-4665-83e1-6e06bae9aeb5-kube-api-access-xmkzm\") pod \"nova-api-5829-account-create-update-wq6kb\" (UID: \"b41ea1a4-fd22-4665-83e1-6e06bae9aeb5\") " pod="openstack/nova-api-5829-account-create-update-wq6kb" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.541761 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4srxw\" (UniqueName: 
\"kubernetes.io/projected/668dd17c-3a2a-48b7-967a-a06a2b9a1192-kube-api-access-4srxw\") pod \"nova-cell1-db-create-bh2tb\" (UID: \"668dd17c-3a2a-48b7-967a-a06a2b9a1192\") " pod="openstack/nova-cell1-db-create-bh2tb" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.541898 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b41ea1a4-fd22-4665-83e1-6e06bae9aeb5-operator-scripts\") pod \"nova-api-5829-account-create-update-wq6kb\" (UID: \"b41ea1a4-fd22-4665-83e1-6e06bae9aeb5\") " pod="openstack/nova-api-5829-account-create-update-wq6kb" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.541954 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668dd17c-3a2a-48b7-967a-a06a2b9a1192-operator-scripts\") pod \"nova-cell1-db-create-bh2tb\" (UID: \"668dd17c-3a2a-48b7-967a-a06a2b9a1192\") " pod="openstack/nova-cell1-db-create-bh2tb" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.542083 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmkzm\" (UniqueName: \"kubernetes.io/projected/b41ea1a4-fd22-4665-83e1-6e06bae9aeb5-kube-api-access-xmkzm\") pod \"nova-api-5829-account-create-update-wq6kb\" (UID: \"b41ea1a4-fd22-4665-83e1-6e06bae9aeb5\") " pod="openstack/nova-api-5829-account-create-update-wq6kb" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.543192 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b41ea1a4-fd22-4665-83e1-6e06bae9aeb5-operator-scripts\") pod \"nova-api-5829-account-create-update-wq6kb\" (UID: \"b41ea1a4-fd22-4665-83e1-6e06bae9aeb5\") " pod="openstack/nova-api-5829-account-create-update-wq6kb" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.543216 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668dd17c-3a2a-48b7-967a-a06a2b9a1192-operator-scripts\") pod \"nova-cell1-db-create-bh2tb\" (UID: \"668dd17c-3a2a-48b7-967a-a06a2b9a1192\") " pod="openstack/nova-cell1-db-create-bh2tb" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.579458 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4srxw\" (UniqueName: \"kubernetes.io/projected/668dd17c-3a2a-48b7-967a-a06a2b9a1192-kube-api-access-4srxw\") pod \"nova-cell1-db-create-bh2tb\" (UID: \"668dd17c-3a2a-48b7-967a-a06a2b9a1192\") " pod="openstack/nova-cell1-db-create-bh2tb" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.592919 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmkzm\" (UniqueName: \"kubernetes.io/projected/b41ea1a4-fd22-4665-83e1-6e06bae9aeb5-kube-api-access-xmkzm\") pod \"nova-api-5829-account-create-update-wq6kb\" (UID: \"b41ea1a4-fd22-4665-83e1-6e06bae9aeb5\") " pod="openstack/nova-api-5829-account-create-update-wq6kb" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.604297 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5829-account-create-update-wq6kb" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.605639 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-d619-account-create-update-t2827"] Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.607284 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-d619-account-create-update-t2827" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.624324 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.631407 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d619-account-create-update-t2827"] Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.653278 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qr46\" (UniqueName: \"kubernetes.io/projected/fe428932-5a60-4f4d-baf2-a970bd94e6e4-kube-api-access-8qr46\") pod \"nova-cell0-d619-account-create-update-t2827\" (UID: \"fe428932-5a60-4f4d-baf2-a970bd94e6e4\") " pod="openstack/nova-cell0-d619-account-create-update-t2827" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.653356 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe428932-5a60-4f4d-baf2-a970bd94e6e4-operator-scripts\") pod \"nova-cell0-d619-account-create-update-t2827\" (UID: \"fe428932-5a60-4f4d-baf2-a970bd94e6e4\") " pod="openstack/nova-cell0-d619-account-create-update-t2827" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.706878 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bh2tb" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.738757 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8111-account-create-update-29vdk"] Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.740385 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8111-account-create-update-29vdk" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.747256 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.757296 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qr46\" (UniqueName: \"kubernetes.io/projected/fe428932-5a60-4f4d-baf2-a970bd94e6e4-kube-api-access-8qr46\") pod \"nova-cell0-d619-account-create-update-t2827\" (UID: \"fe428932-5a60-4f4d-baf2-a970bd94e6e4\") " pod="openstack/nova-cell0-d619-account-create-update-t2827" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.757363 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe428932-5a60-4f4d-baf2-a970bd94e6e4-operator-scripts\") pod \"nova-cell0-d619-account-create-update-t2827\" (UID: \"fe428932-5a60-4f4d-baf2-a970bd94e6e4\") " pod="openstack/nova-cell0-d619-account-create-update-t2827" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.761951 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe428932-5a60-4f4d-baf2-a970bd94e6e4-operator-scripts\") pod \"nova-cell0-d619-account-create-update-t2827\" (UID: \"fe428932-5a60-4f4d-baf2-a970bd94e6e4\") " pod="openstack/nova-cell0-d619-account-create-update-t2827" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.805877 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qr46\" (UniqueName: \"kubernetes.io/projected/fe428932-5a60-4f4d-baf2-a970bd94e6e4-kube-api-access-8qr46\") pod \"nova-cell0-d619-account-create-update-t2827\" (UID: \"fe428932-5a60-4f4d-baf2-a970bd94e6e4\") " pod="openstack/nova-cell0-d619-account-create-update-t2827" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 
08:03:00.821316 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8111-account-create-update-29vdk"] Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.862850 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3599801a-a9fe-4efe-9f2d-5af2507a76ed-operator-scripts\") pod \"nova-cell1-8111-account-create-update-29vdk\" (UID: \"3599801a-a9fe-4efe-9f2d-5af2507a76ed\") " pod="openstack/nova-cell1-8111-account-create-update-29vdk" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.863006 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx7h2\" (UniqueName: \"kubernetes.io/projected/3599801a-a9fe-4efe-9f2d-5af2507a76ed-kube-api-access-tx7h2\") pod \"nova-cell1-8111-account-create-update-29vdk\" (UID: \"3599801a-a9fe-4efe-9f2d-5af2507a76ed\") " pod="openstack/nova-cell1-8111-account-create-update-29vdk" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.923945 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerName="ceilometer-central-agent" containerID="cri-o://117157bc71177979938c47b23953388339e81f185bf62f235ed5caa6f63abd4e" gracePeriod=30 Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.924038 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerName="proxy-httpd" containerID="cri-o://660db2ffdc230fb53e155a5e9ca2ecfec9529872b4d33ff525ffe1c43cd51e9e" gracePeriod=30 Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.924117 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerName="sg-core" 
containerID="cri-o://236ebae6c3383d9557e8018837e25830d73aaeaef7733e215d3f35920c19e2b5" gracePeriod=30 Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.924132 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerName="ceilometer-notification-agent" containerID="cri-o://bbfab8c694233fdc9d54b76b938cefb811ba313a1b1eb4bfcf3e45e9a6eed435" gracePeriod=30 Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.968114 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3599801a-a9fe-4efe-9f2d-5af2507a76ed-operator-scripts\") pod \"nova-cell1-8111-account-create-update-29vdk\" (UID: \"3599801a-a9fe-4efe-9f2d-5af2507a76ed\") " pod="openstack/nova-cell1-8111-account-create-update-29vdk" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.968188 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx7h2\" (UniqueName: \"kubernetes.io/projected/3599801a-a9fe-4efe-9f2d-5af2507a76ed-kube-api-access-tx7h2\") pod \"nova-cell1-8111-account-create-update-29vdk\" (UID: \"3599801a-a9fe-4efe-9f2d-5af2507a76ed\") " pod="openstack/nova-cell1-8111-account-create-update-29vdk" Jan 24 08:03:00 crc kubenswrapper[4705]: I0124 08:03:00.969064 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3599801a-a9fe-4efe-9f2d-5af2507a76ed-operator-scripts\") pod \"nova-cell1-8111-account-create-update-29vdk\" (UID: \"3599801a-a9fe-4efe-9f2d-5af2507a76ed\") " pod="openstack/nova-cell1-8111-account-create-update-29vdk" Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.005212 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx7h2\" (UniqueName: \"kubernetes.io/projected/3599801a-a9fe-4efe-9f2d-5af2507a76ed-kube-api-access-tx7h2\") 
pod \"nova-cell1-8111-account-create-update-29vdk\" (UID: \"3599801a-a9fe-4efe-9f2d-5af2507a76ed\") " pod="openstack/nova-cell1-8111-account-create-update-29vdk" Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.005244 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d619-account-create-update-t2827" Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.178894 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8111-account-create-update-29vdk" Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.191760 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pwc24"] Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.207246 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tnl2q"] Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.528867 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5829-account-create-update-wq6kb"] Jan 24 08:03:01 crc kubenswrapper[4705]: W0124 08:03:01.579006 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb41ea1a4_fd22_4665_83e1_6e06bae9aeb5.slice/crio-e76d28eff396dd18bf5fa63404b6ef239789488b691cf4d6af8db750ee7b1706 WatchSource:0}: Error finding container e76d28eff396dd18bf5fa63404b6ef239789488b691cf4d6af8db750ee7b1706: Status 404 returned error can't find the container with id e76d28eff396dd18bf5fa63404b6ef239789488b691cf4d6af8db750ee7b1706 Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.723397 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bh2tb"] Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.928466 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d619-account-create-update-t2827"] Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 
08:03:01.946867 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pwc24" event={"ID":"9c02cc76-2478-43fb-a558-70fa32999210","Type":"ContainerStarted","Data":"5aa620c05e2ac2326fbb9a4f5041b5b52633ee350d990fed620e6ef75e1b0f39"} Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.949647 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bh2tb" event={"ID":"668dd17c-3a2a-48b7-967a-a06a2b9a1192","Type":"ContainerStarted","Data":"002f936acc54c837162d1007f18fc7f0f585b8c56ced015a46998871087b57ed"} Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.951343 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.952594 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tnl2q" event={"ID":"cbb2778d-90da-49fd-ba99-d1d32efd55c4","Type":"ContainerStarted","Data":"7c53192bd5ca43f9d8c09510f15d882faac3771e7743ef50c75a0a79dd8503bf"} Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.957886 4705 generic.go:334] "Generic (PLEG): container finished" podID="c44b4bd9-f2df-4ca0-8214-904cd04b10e8" containerID="a0a4d36097fa0829035badbb6b222043f52ffb9ea2ba7bc665f21f77dfe06563" exitCode=0 Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.957969 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c44b4bd9-f2df-4ca0-8214-904cd04b10e8","Type":"ContainerDied","Data":"a0a4d36097fa0829035badbb6b222043f52ffb9ea2ba7bc665f21f77dfe06563"} Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.965812 4705 generic.go:334] "Generic (PLEG): container finished" podID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerID="660db2ffdc230fb53e155a5e9ca2ecfec9529872b4d33ff525ffe1c43cd51e9e" exitCode=0 Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.965850 4705 generic.go:334] "Generic (PLEG): container 
finished" podID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerID="236ebae6c3383d9557e8018837e25830d73aaeaef7733e215d3f35920c19e2b5" exitCode=2 Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.965901 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e29114b1-8009-4cf4-8eef-20c19e0687d2","Type":"ContainerDied","Data":"660db2ffdc230fb53e155a5e9ca2ecfec9529872b4d33ff525ffe1c43cd51e9e"} Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.965938 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e29114b1-8009-4cf4-8eef-20c19e0687d2","Type":"ContainerDied","Data":"236ebae6c3383d9557e8018837e25830d73aaeaef7733e215d3f35920c19e2b5"} Jan 24 08:03:01 crc kubenswrapper[4705]: I0124 08:03:01.968887 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5829-account-create-update-wq6kb" event={"ID":"b41ea1a4-fd22-4665-83e1-6e06bae9aeb5","Type":"ContainerStarted","Data":"e76d28eff396dd18bf5fa63404b6ef239789488b691cf4d6af8db750ee7b1706"} Jan 24 08:03:02 crc kubenswrapper[4705]: I0124 08:03:02.083253 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8111-account-create-update-29vdk"] Jan 24 08:03:02 crc kubenswrapper[4705]: I0124 08:03:02.095563 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 24 08:03:02 crc kubenswrapper[4705]: I0124 08:03:02.982794 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d619-account-create-update-t2827" event={"ID":"fe428932-5a60-4f4d-baf2-a970bd94e6e4","Type":"ContainerStarted","Data":"eddf4efead3992a5223fb81e2d53ae9ba52173730702b2404ee3bbe9f4e5f3ec"} Jan 24 08:03:02 crc kubenswrapper[4705]: I0124 08:03:02.984699 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8111-account-create-update-29vdk" 
event={"ID":"3599801a-a9fe-4efe-9f2d-5af2507a76ed","Type":"ContainerStarted","Data":"1d747e35a2d65a3937e100e9f9a3dc3a01b0a5e21e7c9f857d34d23f1ef14bed"} Jan 24 08:03:02 crc kubenswrapper[4705]: I0124 08:03:02.988450 4705 generic.go:334] "Generic (PLEG): container finished" podID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerID="bbfab8c694233fdc9d54b76b938cefb811ba313a1b1eb4bfcf3e45e9a6eed435" exitCode=0 Jan 24 08:03:02 crc kubenswrapper[4705]: I0124 08:03:02.988488 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e29114b1-8009-4cf4-8eef-20c19e0687d2","Type":"ContainerDied","Data":"bbfab8c694233fdc9d54b76b938cefb811ba313a1b1eb4bfcf3e45e9a6eed435"} Jan 24 08:03:02 crc kubenswrapper[4705]: I0124 08:03:02.990259 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-67559d7f8-s8rzr" Jan 24 08:03:02 crc kubenswrapper[4705]: I0124 08:03:02.990352 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6bbd698cdd-bj25j" Jan 24 08:03:03 crc kubenswrapper[4705]: I0124 08:03:03.080727 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-77b89ffb5c-ntbn2"] Jan 24 08:03:03 crc kubenswrapper[4705]: I0124 08:03:03.123102 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b8dc4cdfc-hwv79"] Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.001520 4705 generic.go:334] "Generic (PLEG): container finished" podID="3599801a-a9fe-4efe-9f2d-5af2507a76ed" containerID="dcb7daf75c54f75e30e62afd505ec77cc8c079521a44eae4e9d57eca99552364" exitCode=0 Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.001692 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8111-account-create-update-29vdk" event={"ID":"3599801a-a9fe-4efe-9f2d-5af2507a76ed","Type":"ContainerDied","Data":"dcb7daf75c54f75e30e62afd505ec77cc8c079521a44eae4e9d57eca99552364"} Jan 24 08:03:04 crc 
kubenswrapper[4705]: I0124 08:03:04.004320 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" event={"ID":"699f900a-a29f-4b4b-b38a-102f8d440596","Type":"ContainerDied","Data":"a2a4808df1765d8fe44bc5c51823ba237d9ac518b6a3bb956ac055d1ad7f11a2"} Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.004363 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2a4808df1765d8fe44bc5c51823ba237d9ac518b6a3bb956ac055d1ad7f11a2" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.008509 4705 generic.go:334] "Generic (PLEG): container finished" podID="668dd17c-3a2a-48b7-967a-a06a2b9a1192" containerID="f3f91d9cdb51d1fa3152c720dccb74e5776c9e85c36ca527d6a5f627c07567e9" exitCode=0 Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.008672 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bh2tb" event={"ID":"668dd17c-3a2a-48b7-967a-a06a2b9a1192","Type":"ContainerDied","Data":"f3f91d9cdb51d1fa3152c720dccb74e5776c9e85c36ca527d6a5f627c07567e9"} Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.011397 4705 generic.go:334] "Generic (PLEG): container finished" podID="cbb2778d-90da-49fd-ba99-d1d32efd55c4" containerID="e7711991b75acc17d904e4925d8e0fa40021318256631e5cfec4786d9a29b4d9" exitCode=0 Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.011426 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tnl2q" event={"ID":"cbb2778d-90da-49fd-ba99-d1d32efd55c4","Type":"ContainerDied","Data":"e7711991b75acc17d904e4925d8e0fa40021318256631e5cfec4786d9a29b4d9"} Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.014319 4705 generic.go:334] "Generic (PLEG): container finished" podID="9c02cc76-2478-43fb-a558-70fa32999210" containerID="c9db6cb5d44ff1811594d4fc63402bd6f262227537974c0126beb810e1471bcc" exitCode=0 Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.014398 4705 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-db-create-pwc24" event={"ID":"9c02cc76-2478-43fb-a558-70fa32999210","Type":"ContainerDied","Data":"c9db6cb5d44ff1811594d4fc63402bd6f262227537974c0126beb810e1471bcc"} Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.022887 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77b89ffb5c-ntbn2" event={"ID":"8fc5b3f3-290d-493a-921c-b114b7c2fd98","Type":"ContainerDied","Data":"1b7649ced2a6e2d92bd80a266a6553c3396fa6c284a4f050c96f09bba8efbe30"} Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.022951 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b7649ced2a6e2d92bd80a266a6553c3396fa6c284a4f050c96f09bba8efbe30" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.041610 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c44b4bd9-f2df-4ca0-8214-904cd04b10e8","Type":"ContainerDied","Data":"bf8f0391836a531bd9f53a53b6d93ae8238a5b53c0c748d651572e0855fb24d1"} Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.041992 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf8f0391836a531bd9f53a53b6d93ae8238a5b53c0c748d651572e0855fb24d1" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.044015 4705 generic.go:334] "Generic (PLEG): container finished" podID="b41ea1a4-fd22-4665-83e1-6e06bae9aeb5" containerID="dc79bb1062b394b73060f2c208f26d8ae56448e927153af0ff7ce67e1bf9f7f1" exitCode=0 Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.044063 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5829-account-create-update-wq6kb" event={"ID":"b41ea1a4-fd22-4665-83e1-6e06bae9aeb5","Type":"ContainerDied","Data":"dc79bb1062b394b73060f2c208f26d8ae56448e927153af0ff7ce67e1bf9f7f1"} Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.047340 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="fe428932-5a60-4f4d-baf2-a970bd94e6e4" containerID="775a4bc540b450bf870ac68f95f1e3738c96e27bafb77bdaf59cd11634831fbf" exitCode=0 Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.047370 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d619-account-create-update-t2827" event={"ID":"fe428932-5a60-4f4d-baf2-a970bd94e6e4","Type":"ContainerDied","Data":"775a4bc540b450bf870ac68f95f1e3738c96e27bafb77bdaf59cd11634831fbf"} Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.144680 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.163074 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.171329 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.298555 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr8k4\" (UniqueName: \"kubernetes.io/projected/699f900a-a29f-4b4b-b38a-102f8d440596-kube-api-access-nr8k4\") pod \"699f900a-a29f-4b4b-b38a-102f8d440596\" (UID: \"699f900a-a29f-4b4b-b38a-102f8d440596\") " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.298619 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-combined-ca-bundle\") pod \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\" (UID: \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\") " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.298646 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xktl4\" (UniqueName: 
\"kubernetes.io/projected/8fc5b3f3-290d-493a-921c-b114b7c2fd98-kube-api-access-xktl4\") pod \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\" (UID: \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\") " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.298670 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.298735 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-logs\") pod \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.298803 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-combined-ca-bundle\") pod \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.298949 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-config-data\") pod \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.299008 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-combined-ca-bundle\") pod \"699f900a-a29f-4b4b-b38a-102f8d440596\" (UID: \"699f900a-a29f-4b4b-b38a-102f8d440596\") " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.299043 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-scripts\") pod \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.299071 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lgtg\" (UniqueName: \"kubernetes.io/projected/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-kube-api-access-4lgtg\") pod \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.299114 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-config-data\") pod \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\" (UID: \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\") " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.299148 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-config-data\") pod \"699f900a-a29f-4b4b-b38a-102f8d440596\" (UID: \"699f900a-a29f-4b4b-b38a-102f8d440596\") " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.299181 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-internal-tls-certs\") pod \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\" (UID: \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.299227 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-httpd-run\") pod \"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\" (UID: 
\"c44b4bd9-f2df-4ca0-8214-904cd04b10e8\") " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.299305 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-config-data-custom\") pod \"699f900a-a29f-4b4b-b38a-102f8d440596\" (UID: \"699f900a-a29f-4b4b-b38a-102f8d440596\") " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.299345 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-config-data-custom\") pod \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\" (UID: \"8fc5b3f3-290d-493a-921c-b114b7c2fd98\") " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.303670 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c44b4bd9-f2df-4ca0-8214-904cd04b10e8" (UID: "c44b4bd9-f2df-4ca0-8214-904cd04b10e8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.363153 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-logs" (OuterVolumeSpecName: "logs") pod "c44b4bd9-f2df-4ca0-8214-904cd04b10e8" (UID: "c44b4bd9-f2df-4ca0-8214-904cd04b10e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.369904 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8fc5b3f3-290d-493a-921c-b114b7c2fd98" (UID: "8fc5b3f3-290d-493a-921c-b114b7c2fd98"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.370092 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "c44b4bd9-f2df-4ca0-8214-904cd04b10e8" (UID: "c44b4bd9-f2df-4ca0-8214-904cd04b10e8"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.370319 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/699f900a-a29f-4b4b-b38a-102f8d440596-kube-api-access-nr8k4" (OuterVolumeSpecName: "kube-api-access-nr8k4") pod "699f900a-a29f-4b4b-b38a-102f8d440596" (UID: "699f900a-a29f-4b4b-b38a-102f8d440596"). InnerVolumeSpecName "kube-api-access-nr8k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.413020 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr8k4\" (UniqueName: \"kubernetes.io/projected/699f900a-a29f-4b4b-b38a-102f8d440596-kube-api-access-nr8k4\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.413126 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.413149 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-logs\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.413164 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:04 crc 
kubenswrapper[4705]: I0124 08:03:04.413226 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.428976 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-kube-api-access-4lgtg" (OuterVolumeSpecName: "kube-api-access-4lgtg") pod "c44b4bd9-f2df-4ca0-8214-904cd04b10e8" (UID: "c44b4bd9-f2df-4ca0-8214-904cd04b10e8"). InnerVolumeSpecName "kube-api-access-4lgtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.431312 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fc5b3f3-290d-493a-921c-b114b7c2fd98-kube-api-access-xktl4" (OuterVolumeSpecName: "kube-api-access-xktl4") pod "8fc5b3f3-290d-493a-921c-b114b7c2fd98" (UID: "8fc5b3f3-290d-493a-921c-b114b7c2fd98"). InnerVolumeSpecName "kube-api-access-xktl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.437105 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-scripts" (OuterVolumeSpecName: "scripts") pod "c44b4bd9-f2df-4ca0-8214-904cd04b10e8" (UID: "c44b4bd9-f2df-4ca0-8214-904cd04b10e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.490112 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "699f900a-a29f-4b4b-b38a-102f8d440596" (UID: "699f900a-a29f-4b4b-b38a-102f8d440596"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.517784 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.527151 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lgtg\" (UniqueName: \"kubernetes.io/projected/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-kube-api-access-4lgtg\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.527252 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.527271 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xktl4\" (UniqueName: \"kubernetes.io/projected/8fc5b3f3-290d-493a-921c-b114b7c2fd98-kube-api-access-xktl4\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.556889 4705 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.565674 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c44b4bd9-f2df-4ca0-8214-904cd04b10e8" (UID: "c44b4bd9-f2df-4ca0-8214-904cd04b10e8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.598186 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "699f900a-a29f-4b4b-b38a-102f8d440596" (UID: "699f900a-a29f-4b4b-b38a-102f8d440596"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.610146 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fc5b3f3-290d-493a-921c-b114b7c2fd98" (UID: "8fc5b3f3-290d-493a-921c-b114b7c2fd98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.634708 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.635596 4705 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.635655 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.635671 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.647745 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c44b4bd9-f2df-4ca0-8214-904cd04b10e8" (UID: "c44b4bd9-f2df-4ca0-8214-904cd04b10e8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.654207 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-config-data" (OuterVolumeSpecName: "config-data") pod "8fc5b3f3-290d-493a-921c-b114b7c2fd98" (UID: "8fc5b3f3-290d-493a-921c-b114b7c2fd98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.654406 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-config-data" (OuterVolumeSpecName: "config-data") pod "c44b4bd9-f2df-4ca0-8214-904cd04b10e8" (UID: "c44b4bd9-f2df-4ca0-8214-904cd04b10e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.674565 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-config-data" (OuterVolumeSpecName: "config-data") pod "699f900a-a29f-4b4b-b38a-102f8d440596" (UID: "699f900a-a29f-4b4b-b38a-102f8d440596"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.738232 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.738303 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc5b3f3-290d-493a-921c-b114b7c2fd98-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.738323 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/699f900a-a29f-4b4b-b38a-102f8d440596-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:04 crc kubenswrapper[4705]: I0124 08:03:04.738337 4705 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44b4bd9-f2df-4ca0-8214-904cd04b10e8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.056653 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b8dc4cdfc-hwv79" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.056885 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-77b89ffb5c-ntbn2" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.058656 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.162600 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b8dc4cdfc-hwv79"] Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.188857 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7b8dc4cdfc-hwv79"] Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.220918 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-77b89ffb5c-ntbn2"] Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.231366 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-77b89ffb5c-ntbn2"] Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.247168 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.278932 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.288298 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 08:03:05 crc kubenswrapper[4705]: E0124 08:03:05.289074 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc5b3f3-290d-493a-921c-b114b7c2fd98" containerName="heat-api" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.289090 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc5b3f3-290d-493a-921c-b114b7c2fd98" containerName="heat-api" Jan 24 08:03:05 crc kubenswrapper[4705]: E0124 08:03:05.289110 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc5b3f3-290d-493a-921c-b114b7c2fd98" containerName="heat-api" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.289118 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc5b3f3-290d-493a-921c-b114b7c2fd98" containerName="heat-api" Jan 24 08:03:05 crc 
kubenswrapper[4705]: E0124 08:03:05.289133 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="699f900a-a29f-4b4b-b38a-102f8d440596" containerName="heat-cfnapi" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.289139 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="699f900a-a29f-4b4b-b38a-102f8d440596" containerName="heat-cfnapi" Jan 24 08:03:05 crc kubenswrapper[4705]: E0124 08:03:05.289152 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44b4bd9-f2df-4ca0-8214-904cd04b10e8" containerName="glance-log" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.289158 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44b4bd9-f2df-4ca0-8214-904cd04b10e8" containerName="glance-log" Jan 24 08:03:05 crc kubenswrapper[4705]: E0124 08:03:05.289175 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="699f900a-a29f-4b4b-b38a-102f8d440596" containerName="heat-cfnapi" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.289183 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="699f900a-a29f-4b4b-b38a-102f8d440596" containerName="heat-cfnapi" Jan 24 08:03:05 crc kubenswrapper[4705]: E0124 08:03:05.289203 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44b4bd9-f2df-4ca0-8214-904cd04b10e8" containerName="glance-httpd" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.289209 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44b4bd9-f2df-4ca0-8214-904cd04b10e8" containerName="glance-httpd" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.289395 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44b4bd9-f2df-4ca0-8214-904cd04b10e8" containerName="glance-log" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.289411 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fc5b3f3-290d-493a-921c-b114b7c2fd98" containerName="heat-api" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.289421 4705 
memory_manager.go:354] "RemoveStaleState removing state" podUID="699f900a-a29f-4b4b-b38a-102f8d440596" containerName="heat-cfnapi" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.289428 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44b4bd9-f2df-4ca0-8214-904cd04b10e8" containerName="glance-httpd" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.289436 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fc5b3f3-290d-493a-921c-b114b7c2fd98" containerName="heat-api" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.289802 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="699f900a-a29f-4b4b-b38a-102f8d440596" containerName="heat-cfnapi" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.290560 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.295726 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.299159 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.302810 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.356136 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.356606 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hp56l\" (UniqueName: \"kubernetes.io/projected/2ddfb089-24fd-436d-9f98-df7b3933d5f1-kube-api-access-hp56l\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.356732 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ddfb089-24fd-436d-9f98-df7b3933d5f1-logs\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.356843 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ddfb089-24fd-436d-9f98-df7b3933d5f1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.356952 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ddfb089-24fd-436d-9f98-df7b3933d5f1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.357043 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ddfb089-24fd-436d-9f98-df7b3933d5f1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.357150 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ddfb089-24fd-436d-9f98-df7b3933d5f1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.357275 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddfb089-24fd-436d-9f98-df7b3933d5f1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.459013 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ddfb089-24fd-436d-9f98-df7b3933d5f1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.459081 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ddfb089-24fd-436d-9f98-df7b3933d5f1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.459131 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ddfb089-24fd-436d-9f98-df7b3933d5f1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.459202 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddfb089-24fd-436d-9f98-df7b3933d5f1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.459274 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.459315 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp56l\" (UniqueName: \"kubernetes.io/projected/2ddfb089-24fd-436d-9f98-df7b3933d5f1-kube-api-access-hp56l\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.459338 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ddfb089-24fd-436d-9f98-df7b3933d5f1-logs\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.459365 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ddfb089-24fd-436d-9f98-df7b3933d5f1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.462801 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.463201 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ddfb089-24fd-436d-9f98-df7b3933d5f1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.463457 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ddfb089-24fd-436d-9f98-df7b3933d5f1-logs\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.486320 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ddfb089-24fd-436d-9f98-df7b3933d5f1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.502413 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddfb089-24fd-436d-9f98-df7b3933d5f1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.504202 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ddfb089-24fd-436d-9f98-df7b3933d5f1-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.513926 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ddfb089-24fd-436d-9f98-df7b3933d5f1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.532673 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp56l\" (UniqueName: \"kubernetes.io/projected/2ddfb089-24fd-436d-9f98-df7b3933d5f1-kube-api-access-hp56l\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.541456 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"2ddfb089-24fd-436d-9f98-df7b3933d5f1\") " pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.628800 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="699f900a-a29f-4b4b-b38a-102f8d440596" path="/var/lib/kubelet/pods/699f900a-a29f-4b4b-b38a-102f8d440596/volumes" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.629435 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fc5b3f3-290d-493a-921c-b114b7c2fd98" path="/var/lib/kubelet/pods/8fc5b3f3-290d-493a-921c-b114b7c2fd98/volumes" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.630109 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44b4bd9-f2df-4ca0-8214-904cd04b10e8" 
path="/var/lib/kubelet/pods/c44b4bd9-f2df-4ca0-8214-904cd04b10e8/volumes" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.631751 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.632174 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2751ca08-c852-4a96-85db-d8ace6894326" containerName="glance-log" containerID="cri-o://a4640449752b5c26ec4c660f738302119836ab01f915a786be81139c2b3de9c3" gracePeriod=30 Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.633091 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2751ca08-c852-4a96-85db-d8ace6894326" containerName="glance-httpd" containerID="cri-o://c97abda2052572a883392013120f7bd6fafd518ffbfe655f3059288a69738549" gracePeriod=30 Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.642168 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.695619 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bh2tb" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.778283 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4srxw\" (UniqueName: \"kubernetes.io/projected/668dd17c-3a2a-48b7-967a-a06a2b9a1192-kube-api-access-4srxw\") pod \"668dd17c-3a2a-48b7-967a-a06a2b9a1192\" (UID: \"668dd17c-3a2a-48b7-967a-a06a2b9a1192\") " Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.778384 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668dd17c-3a2a-48b7-967a-a06a2b9a1192-operator-scripts\") pod \"668dd17c-3a2a-48b7-967a-a06a2b9a1192\" (UID: \"668dd17c-3a2a-48b7-967a-a06a2b9a1192\") " Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.782276 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668dd17c-3a2a-48b7-967a-a06a2b9a1192-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "668dd17c-3a2a-48b7-967a-a06a2b9a1192" (UID: "668dd17c-3a2a-48b7-967a-a06a2b9a1192"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.809151 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668dd17c-3a2a-48b7-967a-a06a2b9a1192-kube-api-access-4srxw" (OuterVolumeSpecName: "kube-api-access-4srxw") pod "668dd17c-3a2a-48b7-967a-a06a2b9a1192" (UID: "668dd17c-3a2a-48b7-967a-a06a2b9a1192"). InnerVolumeSpecName "kube-api-access-4srxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.882934 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4srxw\" (UniqueName: \"kubernetes.io/projected/668dd17c-3a2a-48b7-967a-a06a2b9a1192-kube-api-access-4srxw\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:05 crc kubenswrapper[4705]: I0124 08:03:05.882983 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668dd17c-3a2a-48b7-967a-a06a2b9a1192-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.047717 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7798c79c68-jdzb7" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.090756 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bh2tb" event={"ID":"668dd17c-3a2a-48b7-967a-a06a2b9a1192","Type":"ContainerDied","Data":"002f936acc54c837162d1007f18fc7f0f585b8c56ced015a46998871087b57ed"} Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.090804 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="002f936acc54c837162d1007f18fc7f0f585b8c56ced015a46998871087b57ed" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.090892 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bh2tb" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.145748 4705 generic.go:334] "Generic (PLEG): container finished" podID="2751ca08-c852-4a96-85db-d8ace6894326" containerID="a4640449752b5c26ec4c660f738302119836ab01f915a786be81139c2b3de9c3" exitCode=143 Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.145969 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2751ca08-c852-4a96-85db-d8ace6894326","Type":"ContainerDied","Data":"a4640449752b5c26ec4c660f738302119836ab01f915a786be81139c2b3de9c3"} Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.163065 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d619-account-create-update-t2827" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.235222 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-dbcdf5676-895jp"] Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.235990 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-dbcdf5676-895jp" podUID="b2713791-b8f1-47ae-a438-c2f5e97ef433" containerName="heat-engine" containerID="cri-o://b3e3598aeab33154c13cc40f817158c06e1a8a8174eea892f41d5f80fcd943c8" gracePeriod=60 Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.239307 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qr46\" (UniqueName: \"kubernetes.io/projected/fe428932-5a60-4f4d-baf2-a970bd94e6e4-kube-api-access-8qr46\") pod \"fe428932-5a60-4f4d-baf2-a970bd94e6e4\" (UID: \"fe428932-5a60-4f4d-baf2-a970bd94e6e4\") " Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.239379 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe428932-5a60-4f4d-baf2-a970bd94e6e4-operator-scripts\") pod 
\"fe428932-5a60-4f4d-baf2-a970bd94e6e4\" (UID: \"fe428932-5a60-4f4d-baf2-a970bd94e6e4\") " Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.241814 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe428932-5a60-4f4d-baf2-a970bd94e6e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe428932-5a60-4f4d-baf2-a970bd94e6e4" (UID: "fe428932-5a60-4f4d-baf2-a970bd94e6e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.252142 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe428932-5a60-4f4d-baf2-a970bd94e6e4-kube-api-access-8qr46" (OuterVolumeSpecName: "kube-api-access-8qr46") pod "fe428932-5a60-4f4d-baf2-a970bd94e6e4" (UID: "fe428932-5a60-4f4d-baf2-a970bd94e6e4"). InnerVolumeSpecName "kube-api-access-8qr46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.304195 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8111-account-create-update-29vdk" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.318452 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pwc24" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.332597 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tnl2q" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.343974 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe428932-5a60-4f4d-baf2-a970bd94e6e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.344019 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qr46\" (UniqueName: \"kubernetes.io/projected/fe428932-5a60-4f4d-baf2-a970bd94e6e4-kube-api-access-8qr46\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.355299 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5829-account-create-update-wq6kb" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.448817 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfjqj\" (UniqueName: \"kubernetes.io/projected/9c02cc76-2478-43fb-a558-70fa32999210-kube-api-access-jfjqj\") pod \"9c02cc76-2478-43fb-a558-70fa32999210\" (UID: \"9c02cc76-2478-43fb-a558-70fa32999210\") " Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.449234 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c02cc76-2478-43fb-a558-70fa32999210-operator-scripts\") pod \"9c02cc76-2478-43fb-a558-70fa32999210\" (UID: \"9c02cc76-2478-43fb-a558-70fa32999210\") " Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.449367 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx7h2\" (UniqueName: \"kubernetes.io/projected/3599801a-a9fe-4efe-9f2d-5af2507a76ed-kube-api-access-tx7h2\") pod \"3599801a-a9fe-4efe-9f2d-5af2507a76ed\" (UID: \"3599801a-a9fe-4efe-9f2d-5af2507a76ed\") " Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.449399 4705 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3599801a-a9fe-4efe-9f2d-5af2507a76ed-operator-scripts\") pod \"3599801a-a9fe-4efe-9f2d-5af2507a76ed\" (UID: \"3599801a-a9fe-4efe-9f2d-5af2507a76ed\") " Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.449477 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b41ea1a4-fd22-4665-83e1-6e06bae9aeb5-operator-scripts\") pod \"b41ea1a4-fd22-4665-83e1-6e06bae9aeb5\" (UID: \"b41ea1a4-fd22-4665-83e1-6e06bae9aeb5\") " Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.449508 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2znhr\" (UniqueName: \"kubernetes.io/projected/cbb2778d-90da-49fd-ba99-d1d32efd55c4-kube-api-access-2znhr\") pod \"cbb2778d-90da-49fd-ba99-d1d32efd55c4\" (UID: \"cbb2778d-90da-49fd-ba99-d1d32efd55c4\") " Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.449556 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb2778d-90da-49fd-ba99-d1d32efd55c4-operator-scripts\") pod \"cbb2778d-90da-49fd-ba99-d1d32efd55c4\" (UID: \"cbb2778d-90da-49fd-ba99-d1d32efd55c4\") " Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.449584 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmkzm\" (UniqueName: \"kubernetes.io/projected/b41ea1a4-fd22-4665-83e1-6e06bae9aeb5-kube-api-access-xmkzm\") pod \"b41ea1a4-fd22-4665-83e1-6e06bae9aeb5\" (UID: \"b41ea1a4-fd22-4665-83e1-6e06bae9aeb5\") " Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.454194 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b41ea1a4-fd22-4665-83e1-6e06bae9aeb5-kube-api-access-xmkzm" (OuterVolumeSpecName: 
"kube-api-access-xmkzm") pod "b41ea1a4-fd22-4665-83e1-6e06bae9aeb5" (UID: "b41ea1a4-fd22-4665-83e1-6e06bae9aeb5"). InnerVolumeSpecName "kube-api-access-xmkzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.454520 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3599801a-a9fe-4efe-9f2d-5af2507a76ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3599801a-a9fe-4efe-9f2d-5af2507a76ed" (UID: "3599801a-a9fe-4efe-9f2d-5af2507a76ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.454569 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b41ea1a4-fd22-4665-83e1-6e06bae9aeb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b41ea1a4-fd22-4665-83e1-6e06bae9aeb5" (UID: "b41ea1a4-fd22-4665-83e1-6e06bae9aeb5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.455957 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c02cc76-2478-43fb-a558-70fa32999210-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c02cc76-2478-43fb-a558-70fa32999210" (UID: "9c02cc76-2478-43fb-a558-70fa32999210"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.456337 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbb2778d-90da-49fd-ba99-d1d32efd55c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cbb2778d-90da-49fd-ba99-d1d32efd55c4" (UID: "cbb2778d-90da-49fd-ba99-d1d32efd55c4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.457266 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbb2778d-90da-49fd-ba99-d1d32efd55c4-kube-api-access-2znhr" (OuterVolumeSpecName: "kube-api-access-2znhr") pod "cbb2778d-90da-49fd-ba99-d1d32efd55c4" (UID: "cbb2778d-90da-49fd-ba99-d1d32efd55c4"). InnerVolumeSpecName "kube-api-access-2znhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.474167 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3599801a-a9fe-4efe-9f2d-5af2507a76ed-kube-api-access-tx7h2" (OuterVolumeSpecName: "kube-api-access-tx7h2") pod "3599801a-a9fe-4efe-9f2d-5af2507a76ed" (UID: "3599801a-a9fe-4efe-9f2d-5af2507a76ed"). InnerVolumeSpecName "kube-api-access-tx7h2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.475201 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c02cc76-2478-43fb-a558-70fa32999210-kube-api-access-jfjqj" (OuterVolumeSpecName: "kube-api-access-jfjqj") pod "9c02cc76-2478-43fb-a558-70fa32999210" (UID: "9c02cc76-2478-43fb-a558-70fa32999210"). InnerVolumeSpecName "kube-api-access-jfjqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.553893 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx7h2\" (UniqueName: \"kubernetes.io/projected/3599801a-a9fe-4efe-9f2d-5af2507a76ed-kube-api-access-tx7h2\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.555239 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3599801a-a9fe-4efe-9f2d-5af2507a76ed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.555272 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b41ea1a4-fd22-4665-83e1-6e06bae9aeb5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.555285 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2znhr\" (UniqueName: \"kubernetes.io/projected/cbb2778d-90da-49fd-ba99-d1d32efd55c4-kube-api-access-2znhr\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.555297 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb2778d-90da-49fd-ba99-d1d32efd55c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.555308 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmkzm\" (UniqueName: \"kubernetes.io/projected/b41ea1a4-fd22-4665-83e1-6e06bae9aeb5-kube-api-access-xmkzm\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.555321 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfjqj\" (UniqueName: \"kubernetes.io/projected/9c02cc76-2478-43fb-a558-70fa32999210-kube-api-access-jfjqj\") on node \"crc\" DevicePath \"\"" 
Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.555332 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c02cc76-2478-43fb-a558-70fa32999210-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:06 crc kubenswrapper[4705]: W0124 08:03:06.842121 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ddfb089_24fd_436d_9f98_df7b3933d5f1.slice/crio-398e77b2edc87446a0eb2b7ba6f7ac7babe226a75d8d00235d83bfc0a92d6a26 WatchSource:0}: Error finding container 398e77b2edc87446a0eb2b7ba6f7ac7babe226a75d8d00235d83bfc0a92d6a26: Status 404 returned error can't find the container with id 398e77b2edc87446a0eb2b7ba6f7ac7babe226a75d8d00235d83bfc0a92d6a26 Jan 24 08:03:06 crc kubenswrapper[4705]: I0124 08:03:06.842978 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 08:03:07 crc kubenswrapper[4705]: I0124 08:03:07.175096 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5829-account-create-update-wq6kb" event={"ID":"b41ea1a4-fd22-4665-83e1-6e06bae9aeb5","Type":"ContainerDied","Data":"e76d28eff396dd18bf5fa63404b6ef239789488b691cf4d6af8db750ee7b1706"} Jan 24 08:03:07 crc kubenswrapper[4705]: I0124 08:03:07.175461 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e76d28eff396dd18bf5fa63404b6ef239789488b691cf4d6af8db750ee7b1706" Jan 24 08:03:07 crc kubenswrapper[4705]: I0124 08:03:07.175528 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-5829-account-create-update-wq6kb" Jan 24 08:03:07 crc kubenswrapper[4705]: I0124 08:03:07.275081 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d619-account-create-update-t2827" event={"ID":"fe428932-5a60-4f4d-baf2-a970bd94e6e4","Type":"ContainerDied","Data":"eddf4efead3992a5223fb81e2d53ae9ba52173730702b2404ee3bbe9f4e5f3ec"} Jan 24 08:03:07 crc kubenswrapper[4705]: I0124 08:03:07.275144 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eddf4efead3992a5223fb81e2d53ae9ba52173730702b2404ee3bbe9f4e5f3ec" Jan 24 08:03:07 crc kubenswrapper[4705]: I0124 08:03:07.275264 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d619-account-create-update-t2827" Jan 24 08:03:07 crc kubenswrapper[4705]: I0124 08:03:07.300951 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ddfb089-24fd-436d-9f98-df7b3933d5f1","Type":"ContainerStarted","Data":"398e77b2edc87446a0eb2b7ba6f7ac7babe226a75d8d00235d83bfc0a92d6a26"} Jan 24 08:03:07 crc kubenswrapper[4705]: I0124 08:03:07.321852 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pwc24" event={"ID":"9c02cc76-2478-43fb-a558-70fa32999210","Type":"ContainerDied","Data":"5aa620c05e2ac2326fbb9a4f5041b5b52633ee350d990fed620e6ef75e1b0f39"} Jan 24 08:03:07 crc kubenswrapper[4705]: I0124 08:03:07.321919 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aa620c05e2ac2326fbb9a4f5041b5b52633ee350d990fed620e6ef75e1b0f39" Jan 24 08:03:07 crc kubenswrapper[4705]: I0124 08:03:07.321983 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-pwc24" Jan 24 08:03:07 crc kubenswrapper[4705]: I0124 08:03:07.343317 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tnl2q" event={"ID":"cbb2778d-90da-49fd-ba99-d1d32efd55c4","Type":"ContainerDied","Data":"7c53192bd5ca43f9d8c09510f15d882faac3771e7743ef50c75a0a79dd8503bf"} Jan 24 08:03:07 crc kubenswrapper[4705]: I0124 08:03:07.343378 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c53192bd5ca43f9d8c09510f15d882faac3771e7743ef50c75a0a79dd8503bf" Jan 24 08:03:07 crc kubenswrapper[4705]: I0124 08:03:07.343478 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tnl2q" Jan 24 08:03:07 crc kubenswrapper[4705]: I0124 08:03:07.410757 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8111-account-create-update-29vdk" event={"ID":"3599801a-a9fe-4efe-9f2d-5af2507a76ed","Type":"ContainerDied","Data":"1d747e35a2d65a3937e100e9f9a3dc3a01b0a5e21e7c9f857d34d23f1ef14bed"} Jan 24 08:03:07 crc kubenswrapper[4705]: I0124 08:03:07.410808 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d747e35a2d65a3937e100e9f9a3dc3a01b0a5e21e7c9f857d34d23f1ef14bed" Jan 24 08:03:07 crc kubenswrapper[4705]: I0124 08:03:07.410913 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8111-account-create-update-29vdk" Jan 24 08:03:08 crc kubenswrapper[4705]: I0124 08:03:08.423993 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ddfb089-24fd-436d-9f98-df7b3933d5f1","Type":"ContainerStarted","Data":"3cd64be1125a7f1453ff1088dcb93d845fb54948cca0ff0434c642de976c5ed9"} Jan 24 08:03:08 crc kubenswrapper[4705]: I0124 08:03:08.424553 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ddfb089-24fd-436d-9f98-df7b3933d5f1","Type":"ContainerStarted","Data":"fdf2fb9a054620a096bf10d646476ccc34c90fbeba248596881e029fe942048c"} Jan 24 08:03:08 crc kubenswrapper[4705]: I0124 08:03:08.450561 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.450540717 podStartE2EDuration="3.450540717s" podCreationTimestamp="2026-01-24 08:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:03:08.445583088 +0000 UTC m=+1327.165456386" watchObservedRunningTime="2026-01-24 08:03:08.450540717 +0000 UTC m=+1327.170414005" Jan 24 08:03:09 crc kubenswrapper[4705]: E0124 08:03:09.292434 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b3e3598aeab33154c13cc40f817158c06e1a8a8174eea892f41d5f80fcd943c8" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 24 08:03:09 crc kubenswrapper[4705]: E0124 08:03:09.304203 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="b3e3598aeab33154c13cc40f817158c06e1a8a8174eea892f41d5f80fcd943c8" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 24 08:03:09 crc kubenswrapper[4705]: E0124 08:03:09.310168 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b3e3598aeab33154c13cc40f817158c06e1a8a8174eea892f41d5f80fcd943c8" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 24 08:03:09 crc kubenswrapper[4705]: E0124 08:03:09.310252 4705 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-dbcdf5676-895jp" podUID="b2713791-b8f1-47ae-a438-c2f5e97ef433" containerName="heat-engine" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.391324 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.436857 4705 generic.go:334] "Generic (PLEG): container finished" podID="2751ca08-c852-4a96-85db-d8ace6894326" containerID="c97abda2052572a883392013120f7bd6fafd518ffbfe655f3059288a69738549" exitCode=0 Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.436944 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2751ca08-c852-4a96-85db-d8ace6894326","Type":"ContainerDied","Data":"c97abda2052572a883392013120f7bd6fafd518ffbfe655f3059288a69738549"} Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.436967 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.436989 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2751ca08-c852-4a96-85db-d8ace6894326","Type":"ContainerDied","Data":"6aee62d3ae4e65139b61f5a936c1792f87a624f4b31a31cdf704f257a202a256"} Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.437010 4705 scope.go:117] "RemoveContainer" containerID="c97abda2052572a883392013120f7bd6fafd518ffbfe655f3059288a69738549" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.478142 4705 scope.go:117] "RemoveContainer" containerID="a4640449752b5c26ec4c660f738302119836ab01f915a786be81139c2b3de9c3" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.504019 4705 scope.go:117] "RemoveContainer" containerID="c97abda2052572a883392013120f7bd6fafd518ffbfe655f3059288a69738549" Jan 24 08:03:09 crc kubenswrapper[4705]: E0124 08:03:09.504702 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c97abda2052572a883392013120f7bd6fafd518ffbfe655f3059288a69738549\": container with ID starting with c97abda2052572a883392013120f7bd6fafd518ffbfe655f3059288a69738549 not found: ID does not exist" containerID="c97abda2052572a883392013120f7bd6fafd518ffbfe655f3059288a69738549" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.504787 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c97abda2052572a883392013120f7bd6fafd518ffbfe655f3059288a69738549"} err="failed to get container status \"c97abda2052572a883392013120f7bd6fafd518ffbfe655f3059288a69738549\": rpc error: code = NotFound desc = could not find container \"c97abda2052572a883392013120f7bd6fafd518ffbfe655f3059288a69738549\": container with ID starting with c97abda2052572a883392013120f7bd6fafd518ffbfe655f3059288a69738549 not found: ID does not exist" Jan 24 08:03:09 crc 
kubenswrapper[4705]: I0124 08:03:09.504975 4705 scope.go:117] "RemoveContainer" containerID="a4640449752b5c26ec4c660f738302119836ab01f915a786be81139c2b3de9c3" Jan 24 08:03:09 crc kubenswrapper[4705]: E0124 08:03:09.505739 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4640449752b5c26ec4c660f738302119836ab01f915a786be81139c2b3de9c3\": container with ID starting with a4640449752b5c26ec4c660f738302119836ab01f915a786be81139c2b3de9c3 not found: ID does not exist" containerID="a4640449752b5c26ec4c660f738302119836ab01f915a786be81139c2b3de9c3" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.505785 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4640449752b5c26ec4c660f738302119836ab01f915a786be81139c2b3de9c3"} err="failed to get container status \"a4640449752b5c26ec4c660f738302119836ab01f915a786be81139c2b3de9c3\": rpc error: code = NotFound desc = could not find container \"a4640449752b5c26ec4c660f738302119836ab01f915a786be81139c2b3de9c3\": container with ID starting with a4640449752b5c26ec4c660f738302119836ab01f915a786be81139c2b3de9c3 not found: ID does not exist" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.516590 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2751ca08-c852-4a96-85db-d8ace6894326-httpd-run\") pod \"2751ca08-c852-4a96-85db-d8ace6894326\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.516726 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-scripts\") pod \"2751ca08-c852-4a96-85db-d8ace6894326\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.516773 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-public-tls-certs\") pod \"2751ca08-c852-4a96-85db-d8ace6894326\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.516923 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzksz\" (UniqueName: \"kubernetes.io/projected/2751ca08-c852-4a96-85db-d8ace6894326-kube-api-access-zzksz\") pod \"2751ca08-c852-4a96-85db-d8ace6894326\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.517033 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2751ca08-c852-4a96-85db-d8ace6894326-logs\") pod \"2751ca08-c852-4a96-85db-d8ace6894326\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.517200 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"2751ca08-c852-4a96-85db-d8ace6894326\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.517240 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-combined-ca-bundle\") pod \"2751ca08-c852-4a96-85db-d8ace6894326\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.517268 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-config-data\") pod \"2751ca08-c852-4a96-85db-d8ace6894326\" (UID: \"2751ca08-c852-4a96-85db-d8ace6894326\") " Jan 24 08:03:09 
crc kubenswrapper[4705]: I0124 08:03:09.517707 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2751ca08-c852-4a96-85db-d8ace6894326-logs" (OuterVolumeSpecName: "logs") pod "2751ca08-c852-4a96-85db-d8ace6894326" (UID: "2751ca08-c852-4a96-85db-d8ace6894326"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.517912 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2751ca08-c852-4a96-85db-d8ace6894326-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2751ca08-c852-4a96-85db-d8ace6894326" (UID: "2751ca08-c852-4a96-85db-d8ace6894326"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.520600 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2751ca08-c852-4a96-85db-d8ace6894326-logs\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.520628 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2751ca08-c852-4a96-85db-d8ace6894326-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.527448 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "2751ca08-c852-4a96-85db-d8ace6894326" (UID: "2751ca08-c852-4a96-85db-d8ace6894326"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.530114 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-scripts" (OuterVolumeSpecName: "scripts") pod "2751ca08-c852-4a96-85db-d8ace6894326" (UID: "2751ca08-c852-4a96-85db-d8ace6894326"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.535010 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2751ca08-c852-4a96-85db-d8ace6894326-kube-api-access-zzksz" (OuterVolumeSpecName: "kube-api-access-zzksz") pod "2751ca08-c852-4a96-85db-d8ace6894326" (UID: "2751ca08-c852-4a96-85db-d8ace6894326"). InnerVolumeSpecName "kube-api-access-zzksz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.598990 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2751ca08-c852-4a96-85db-d8ace6894326" (UID: "2751ca08-c852-4a96-85db-d8ace6894326"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.611198 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2751ca08-c852-4a96-85db-d8ace6894326" (UID: "2751ca08-c852-4a96-85db-d8ace6894326"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.613231 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-config-data" (OuterVolumeSpecName: "config-data") pod "2751ca08-c852-4a96-85db-d8ace6894326" (UID: "2751ca08-c852-4a96-85db-d8ace6894326"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.627447 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.639027 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.639101 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.639121 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.639130 4705 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2751ca08-c852-4a96-85db-d8ace6894326-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.639153 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzksz\" (UniqueName: 
\"kubernetes.io/projected/2751ca08-c852-4a96-85db-d8ace6894326-kube-api-access-zzksz\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.659292 4705 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.743048 4705 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.789201 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.800075 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.817300 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 08:03:09 crc kubenswrapper[4705]: E0124 08:03:09.818108 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb2778d-90da-49fd-ba99-d1d32efd55c4" containerName="mariadb-database-create" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.818131 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb2778d-90da-49fd-ba99-d1d32efd55c4" containerName="mariadb-database-create" Jan 24 08:03:09 crc kubenswrapper[4705]: E0124 08:03:09.818170 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2751ca08-c852-4a96-85db-d8ace6894326" containerName="glance-log" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.818179 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2751ca08-c852-4a96-85db-d8ace6894326" containerName="glance-log" Jan 24 08:03:09 crc kubenswrapper[4705]: E0124 08:03:09.818197 4705 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="668dd17c-3a2a-48b7-967a-a06a2b9a1192" containerName="mariadb-database-create" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.818204 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="668dd17c-3a2a-48b7-967a-a06a2b9a1192" containerName="mariadb-database-create" Jan 24 08:03:09 crc kubenswrapper[4705]: E0124 08:03:09.818245 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2751ca08-c852-4a96-85db-d8ace6894326" containerName="glance-httpd" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.818254 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2751ca08-c852-4a96-85db-d8ace6894326" containerName="glance-httpd" Jan 24 08:03:09 crc kubenswrapper[4705]: E0124 08:03:09.818264 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c02cc76-2478-43fb-a558-70fa32999210" containerName="mariadb-database-create" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.818270 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c02cc76-2478-43fb-a558-70fa32999210" containerName="mariadb-database-create" Jan 24 08:03:09 crc kubenswrapper[4705]: E0124 08:03:09.818286 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3599801a-a9fe-4efe-9f2d-5af2507a76ed" containerName="mariadb-account-create-update" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.818310 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3599801a-a9fe-4efe-9f2d-5af2507a76ed" containerName="mariadb-account-create-update" Jan 24 08:03:09 crc kubenswrapper[4705]: E0124 08:03:09.818325 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe428932-5a60-4f4d-baf2-a970bd94e6e4" containerName="mariadb-account-create-update" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.818331 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe428932-5a60-4f4d-baf2-a970bd94e6e4" containerName="mariadb-account-create-update" Jan 24 08:03:09 crc 
kubenswrapper[4705]: E0124 08:03:09.818345 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41ea1a4-fd22-4665-83e1-6e06bae9aeb5" containerName="mariadb-account-create-update" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.818351 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41ea1a4-fd22-4665-83e1-6e06bae9aeb5" containerName="mariadb-account-create-update" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.818654 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe428932-5a60-4f4d-baf2-a970bd94e6e4" containerName="mariadb-account-create-update" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.818673 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41ea1a4-fd22-4665-83e1-6e06bae9aeb5" containerName="mariadb-account-create-update" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.818686 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2751ca08-c852-4a96-85db-d8ace6894326" containerName="glance-httpd" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.818695 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="668dd17c-3a2a-48b7-967a-a06a2b9a1192" containerName="mariadb-database-create" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.818722 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2751ca08-c852-4a96-85db-d8ace6894326" containerName="glance-log" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.818731 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbb2778d-90da-49fd-ba99-d1d32efd55c4" containerName="mariadb-database-create" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.818747 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c02cc76-2478-43fb-a558-70fa32999210" containerName="mariadb-database-create" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.818758 4705 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3599801a-a9fe-4efe-9f2d-5af2507a76ed" containerName="mariadb-account-create-update" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.821259 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.824753 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.824985 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.843894 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881a6a33-1c19-4868-b1d8-ff8efde83513-config-data\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.844292 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/881a6a33-1c19-4868-b1d8-ff8efde83513-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.844401 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881a6a33-1c19-4868-b1d8-ff8efde83513-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.844479 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/881a6a33-1c19-4868-b1d8-ff8efde83513-scripts\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.844593 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdr5b\" (UniqueName: \"kubernetes.io/projected/881a6a33-1c19-4868-b1d8-ff8efde83513-kube-api-access-kdr5b\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.844691 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/881a6a33-1c19-4868-b1d8-ff8efde83513-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.844774 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.844921 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/881a6a33-1c19-4868-b1d8-ff8efde83513-logs\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.858891 4705 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.946640 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/881a6a33-1c19-4868-b1d8-ff8efde83513-scripts\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.946705 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881a6a33-1c19-4868-b1d8-ff8efde83513-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.946812 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdr5b\" (UniqueName: \"kubernetes.io/projected/881a6a33-1c19-4868-b1d8-ff8efde83513-kube-api-access-kdr5b\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.946899 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/881a6a33-1c19-4868-b1d8-ff8efde83513-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.946949 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 
08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.946980 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/881a6a33-1c19-4868-b1d8-ff8efde83513-logs\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.947566 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.947782 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881a6a33-1c19-4868-b1d8-ff8efde83513-config-data\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.947892 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/881a6a33-1c19-4868-b1d8-ff8efde83513-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.948287 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/881a6a33-1c19-4868-b1d8-ff8efde83513-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.948530 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/881a6a33-1c19-4868-b1d8-ff8efde83513-logs\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.959856 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881a6a33-1c19-4868-b1d8-ff8efde83513-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.966296 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881a6a33-1c19-4868-b1d8-ff8efde83513-config-data\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.971030 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdr5b\" (UniqueName: \"kubernetes.io/projected/881a6a33-1c19-4868-b1d8-ff8efde83513-kube-api-access-kdr5b\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.981807 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/881a6a33-1c19-4868-b1d8-ff8efde83513-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:09 crc kubenswrapper[4705]: I0124 08:03:09.989413 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/881a6a33-1c19-4868-b1d8-ff8efde83513-scripts\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:10 crc kubenswrapper[4705]: I0124 08:03:10.018543 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"881a6a33-1c19-4868-b1d8-ff8efde83513\") " pod="openstack/glance-default-external-api-0" Jan 24 08:03:10 crc kubenswrapper[4705]: I0124 08:03:10.150962 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 08:03:10 crc kubenswrapper[4705]: I0124 08:03:10.935076 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 08:03:10 crc kubenswrapper[4705]: W0124 08:03:10.954092 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod881a6a33_1c19_4868_b1d8_ff8efde83513.slice/crio-c5d8cce4a5ab775e86a022f2b07c206b945834daee71c9dcabe3a542b9545685 WatchSource:0}: Error finding container c5d8cce4a5ab775e86a022f2b07c206b945834daee71c9dcabe3a542b9545685: Status 404 returned error can't find the container with id c5d8cce4a5ab775e86a022f2b07c206b945834daee71c9dcabe3a542b9545685 Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.183966 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8qzw5"] Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.185683 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8qzw5" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.207635 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8qzw5"] Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.211614 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.211731 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7n4w4" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.212017 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.280135 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-config-data\") pod \"nova-cell0-conductor-db-sync-8qzw5\" (UID: \"cedd21a3-3cd7-415b-b600-7510e930b2a5\") " pod="openstack/nova-cell0-conductor-db-sync-8qzw5" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.280631 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8qzw5\" (UID: \"cedd21a3-3cd7-415b-b600-7510e930b2a5\") " pod="openstack/nova-cell0-conductor-db-sync-8qzw5" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.280735 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-scripts\") pod \"nova-cell0-conductor-db-sync-8qzw5\" (UID: \"cedd21a3-3cd7-415b-b600-7510e930b2a5\") " 
pod="openstack/nova-cell0-conductor-db-sync-8qzw5" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.280843 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf7x8\" (UniqueName: \"kubernetes.io/projected/cedd21a3-3cd7-415b-b600-7510e930b2a5-kube-api-access-gf7x8\") pod \"nova-cell0-conductor-db-sync-8qzw5\" (UID: \"cedd21a3-3cd7-415b-b600-7510e930b2a5\") " pod="openstack/nova-cell0-conductor-db-sync-8qzw5" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.382677 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-config-data\") pod \"nova-cell0-conductor-db-sync-8qzw5\" (UID: \"cedd21a3-3cd7-415b-b600-7510e930b2a5\") " pod="openstack/nova-cell0-conductor-db-sync-8qzw5" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.382782 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8qzw5\" (UID: \"cedd21a3-3cd7-415b-b600-7510e930b2a5\") " pod="openstack/nova-cell0-conductor-db-sync-8qzw5" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.382859 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-scripts\") pod \"nova-cell0-conductor-db-sync-8qzw5\" (UID: \"cedd21a3-3cd7-415b-b600-7510e930b2a5\") " pod="openstack/nova-cell0-conductor-db-sync-8qzw5" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.382920 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf7x8\" (UniqueName: \"kubernetes.io/projected/cedd21a3-3cd7-415b-b600-7510e930b2a5-kube-api-access-gf7x8\") pod \"nova-cell0-conductor-db-sync-8qzw5\" (UID: 
\"cedd21a3-3cd7-415b-b600-7510e930b2a5\") " pod="openstack/nova-cell0-conductor-db-sync-8qzw5" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.389293 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-config-data\") pod \"nova-cell0-conductor-db-sync-8qzw5\" (UID: \"cedd21a3-3cd7-415b-b600-7510e930b2a5\") " pod="openstack/nova-cell0-conductor-db-sync-8qzw5" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.390894 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8qzw5\" (UID: \"cedd21a3-3cd7-415b-b600-7510e930b2a5\") " pod="openstack/nova-cell0-conductor-db-sync-8qzw5" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.397435 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-scripts\") pod \"nova-cell0-conductor-db-sync-8qzw5\" (UID: \"cedd21a3-3cd7-415b-b600-7510e930b2a5\") " pod="openstack/nova-cell0-conductor-db-sync-8qzw5" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.402302 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf7x8\" (UniqueName: \"kubernetes.io/projected/cedd21a3-3cd7-415b-b600-7510e930b2a5-kube-api-access-gf7x8\") pod \"nova-cell0-conductor-db-sync-8qzw5\" (UID: \"cedd21a3-3cd7-415b-b600-7510e930b2a5\") " pod="openstack/nova-cell0-conductor-db-sync-8qzw5" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.499945 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"881a6a33-1c19-4868-b1d8-ff8efde83513","Type":"ContainerStarted","Data":"c5d8cce4a5ab775e86a022f2b07c206b945834daee71c9dcabe3a542b9545685"} Jan 24 08:03:11 crc 
kubenswrapper[4705]: I0124 08:03:11.515280 4705 generic.go:334] "Generic (PLEG): container finished" podID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerID="117157bc71177979938c47b23953388339e81f185bf62f235ed5caa6f63abd4e" exitCode=0 Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.515380 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e29114b1-8009-4cf4-8eef-20c19e0687d2","Type":"ContainerDied","Data":"117157bc71177979938c47b23953388339e81f185bf62f235ed5caa6f63abd4e"} Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.566192 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8qzw5" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.673510 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2751ca08-c852-4a96-85db-d8ace6894326" path="/var/lib/kubelet/pods/2751ca08-c852-4a96-85db-d8ace6894326/volumes" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.811740 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.913222 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn2jq\" (UniqueName: \"kubernetes.io/projected/e29114b1-8009-4cf4-8eef-20c19e0687d2-kube-api-access-cn2jq\") pod \"e29114b1-8009-4cf4-8eef-20c19e0687d2\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.914232 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-scripts\") pod \"e29114b1-8009-4cf4-8eef-20c19e0687d2\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.914402 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e29114b1-8009-4cf4-8eef-20c19e0687d2-log-httpd\") pod \"e29114b1-8009-4cf4-8eef-20c19e0687d2\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.914530 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-config-data\") pod \"e29114b1-8009-4cf4-8eef-20c19e0687d2\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.914704 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-combined-ca-bundle\") pod \"e29114b1-8009-4cf4-8eef-20c19e0687d2\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.914813 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e29114b1-8009-4cf4-8eef-20c19e0687d2-run-httpd\") pod \"e29114b1-8009-4cf4-8eef-20c19e0687d2\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.914932 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-sg-core-conf-yaml\") pod \"e29114b1-8009-4cf4-8eef-20c19e0687d2\" (UID: \"e29114b1-8009-4cf4-8eef-20c19e0687d2\") " Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.917211 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e29114b1-8009-4cf4-8eef-20c19e0687d2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e29114b1-8009-4cf4-8eef-20c19e0687d2" (UID: "e29114b1-8009-4cf4-8eef-20c19e0687d2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.918375 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e29114b1-8009-4cf4-8eef-20c19e0687d2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e29114b1-8009-4cf4-8eef-20c19e0687d2" (UID: "e29114b1-8009-4cf4-8eef-20c19e0687d2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.937854 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e29114b1-8009-4cf4-8eef-20c19e0687d2-kube-api-access-cn2jq" (OuterVolumeSpecName: "kube-api-access-cn2jq") pod "e29114b1-8009-4cf4-8eef-20c19e0687d2" (UID: "e29114b1-8009-4cf4-8eef-20c19e0687d2"). InnerVolumeSpecName "kube-api-access-cn2jq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.938517 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-scripts" (OuterVolumeSpecName: "scripts") pod "e29114b1-8009-4cf4-8eef-20c19e0687d2" (UID: "e29114b1-8009-4cf4-8eef-20c19e0687d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:11 crc kubenswrapper[4705]: I0124 08:03:11.967243 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e29114b1-8009-4cf4-8eef-20c19e0687d2" (UID: "e29114b1-8009-4cf4-8eef-20c19e0687d2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.021124 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.021165 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e29114b1-8009-4cf4-8eef-20c19e0687d2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.021174 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e29114b1-8009-4cf4-8eef-20c19e0687d2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.021186 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:12 crc kubenswrapper[4705]: 
I0124 08:03:12.021197 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn2jq\" (UniqueName: \"kubernetes.io/projected/e29114b1-8009-4cf4-8eef-20c19e0687d2-kube-api-access-cn2jq\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.116264 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e29114b1-8009-4cf4-8eef-20c19e0687d2" (UID: "e29114b1-8009-4cf4-8eef-20c19e0687d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.124134 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.145955 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-config-data" (OuterVolumeSpecName: "config-data") pod "e29114b1-8009-4cf4-8eef-20c19e0687d2" (UID: "e29114b1-8009-4cf4-8eef-20c19e0687d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.200149 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8qzw5"] Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.227008 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29114b1-8009-4cf4-8eef-20c19e0687d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.539909 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"881a6a33-1c19-4868-b1d8-ff8efde83513","Type":"ContainerStarted","Data":"c0e6833f0129935994bf6f27ec9942d56b5ae82a51fa94e689295f83695d7ce4"} Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.546922 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e29114b1-8009-4cf4-8eef-20c19e0687d2","Type":"ContainerDied","Data":"49429341439b6d330b9a7ca8de7c064617feaabfb540f2ff6b3bedf7d4877861"} Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.546981 4705 scope.go:117] "RemoveContainer" containerID="660db2ffdc230fb53e155a5e9ca2ecfec9529872b4d33ff525ffe1c43cd51e9e" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.547128 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.550380 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8qzw5" event={"ID":"cedd21a3-3cd7-415b-b600-7510e930b2a5","Type":"ContainerStarted","Data":"429f368c68d2539fd2aa128e7bce0c9a575641704f8fc86b22be1d0707a53021"} Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.603628 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.608136 4705 scope.go:117] "RemoveContainer" containerID="236ebae6c3383d9557e8018837e25830d73aaeaef7733e215d3f35920c19e2b5" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.623981 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.636959 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:12 crc kubenswrapper[4705]: E0124 08:03:12.637646 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerName="proxy-httpd" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.637663 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerName="proxy-httpd" Jan 24 08:03:12 crc kubenswrapper[4705]: E0124 08:03:12.637688 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerName="ceilometer-central-agent" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.637694 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerName="ceilometer-central-agent" Jan 24 08:03:12 crc kubenswrapper[4705]: E0124 08:03:12.637708 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" 
containerName="ceilometer-notification-agent" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.637714 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerName="ceilometer-notification-agent" Jan 24 08:03:12 crc kubenswrapper[4705]: E0124 08:03:12.637723 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerName="sg-core" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.637729 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerName="sg-core" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.637984 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerName="ceilometer-notification-agent" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.638008 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerName="ceilometer-central-agent" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.638021 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerName="proxy-httpd" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.638037 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" containerName="sg-core" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.640068 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.648032 4705 scope.go:117] "RemoveContainer" containerID="bbfab8c694233fdc9d54b76b938cefb811ba313a1b1eb4bfcf3e45e9a6eed435" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.650921 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.654209 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.670028 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.710093 4705 scope.go:117] "RemoveContainer" containerID="117157bc71177979938c47b23953388339e81f185bf62f235ed5caa6f63abd4e" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.740239 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-config-data\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.740295 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.740322 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " 
pod="openstack/ceilometer-0" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.740385 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khbwm\" (UniqueName: \"kubernetes.io/projected/5dced028-fab3-448a-bb6b-06ce8196a523-kube-api-access-khbwm\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.740429 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-scripts\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.740502 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5dced028-fab3-448a-bb6b-06ce8196a523-run-httpd\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:12 crc kubenswrapper[4705]: I0124 08:03:12.740560 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5dced028-fab3-448a-bb6b-06ce8196a523-log-httpd\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:12.851166 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5dced028-fab3-448a-bb6b-06ce8196a523-run-httpd\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:12.851332 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5dced028-fab3-448a-bb6b-06ce8196a523-log-httpd\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:12.851402 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-config-data\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:12.851468 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:12.851500 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:12.851539 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khbwm\" (UniqueName: \"kubernetes.io/projected/5dced028-fab3-448a-bb6b-06ce8196a523-kube-api-access-khbwm\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:12.851612 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-scripts\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 
08:03:13 crc kubenswrapper[4705]: I0124 08:03:13.022466 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5dced028-fab3-448a-bb6b-06ce8196a523-run-httpd\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:13.024054 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5dced028-fab3-448a-bb6b-06ce8196a523-log-httpd\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:13.034768 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-config-data\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:13.066004 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khbwm\" (UniqueName: \"kubernetes.io/projected/5dced028-fab3-448a-bb6b-06ce8196a523-kube-api-access-khbwm\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:13.075369 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:13.080151 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:13.089276 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-scripts\") pod \"ceilometer-0\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " pod="openstack/ceilometer-0" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:13.270042 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:13.569669 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"881a6a33-1c19-4868-b1d8-ff8efde83513","Type":"ContainerStarted","Data":"f45e265bdb42dd80287a6def8827658e0f464616e6e4e53552d91f6f607e61a8"} Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:13.592345 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e29114b1-8009-4cf4-8eef-20c19e0687d2" path="/var/lib/kubelet/pods/e29114b1-8009-4cf4-8eef-20c19e0687d2/volumes" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:13.603444 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.603417816 podStartE2EDuration="4.603417816s" podCreationTimestamp="2026-01-24 08:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:03:13.600227277 +0000 UTC m=+1332.320100565" watchObservedRunningTime="2026-01-24 08:03:13.603417816 +0000 UTC m=+1332.323291104" Jan 24 08:03:13 crc kubenswrapper[4705]: I0124 08:03:13.845643 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:14 crc kubenswrapper[4705]: I0124 08:03:14.583242 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"5dced028-fab3-448a-bb6b-06ce8196a523","Type":"ContainerStarted","Data":"2fae02bee1db7694da1c6a400e4a2ebb66a10f0ac40edc8ad1632e6b2f01a0b0"} Jan 24 08:03:15 crc kubenswrapper[4705]: I0124 08:03:15.643803 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 24 08:03:15 crc kubenswrapper[4705]: I0124 08:03:15.644233 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 24 08:03:15 crc kubenswrapper[4705]: I0124 08:03:15.662146 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5dced028-fab3-448a-bb6b-06ce8196a523","Type":"ContainerStarted","Data":"c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16"} Jan 24 08:03:15 crc kubenswrapper[4705]: I0124 08:03:15.662229 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5dced028-fab3-448a-bb6b-06ce8196a523","Type":"ContainerStarted","Data":"9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0"} Jan 24 08:03:15 crc kubenswrapper[4705]: I0124 08:03:15.706395 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 24 08:03:15 crc kubenswrapper[4705]: I0124 08:03:15.747250 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 24 08:03:16 crc kubenswrapper[4705]: I0124 08:03:16.679683 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5dced028-fab3-448a-bb6b-06ce8196a523","Type":"ContainerStarted","Data":"23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0"} Jan 24 08:03:16 crc kubenswrapper[4705]: I0124 08:03:16.680446 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 
08:03:16 crc kubenswrapper[4705]: I0124 08:03:16.680477 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 08:03:18 crc kubenswrapper[4705]: I0124 08:03:18.881599 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 24 08:03:18 crc kubenswrapper[4705]: I0124 08:03:18.882269 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 08:03:18 crc kubenswrapper[4705]: I0124 08:03:18.897218 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 24 08:03:19 crc kubenswrapper[4705]: E0124 08:03:19.294165 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b3e3598aeab33154c13cc40f817158c06e1a8a8174eea892f41d5f80fcd943c8 is running failed: container process not found" containerID="b3e3598aeab33154c13cc40f817158c06e1a8a8174eea892f41d5f80fcd943c8" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 24 08:03:19 crc kubenswrapper[4705]: E0124 08:03:19.295005 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b3e3598aeab33154c13cc40f817158c06e1a8a8174eea892f41d5f80fcd943c8 is running failed: container process not found" containerID="b3e3598aeab33154c13cc40f817158c06e1a8a8174eea892f41d5f80fcd943c8" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 24 08:03:19 crc kubenswrapper[4705]: E0124 08:03:19.298506 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b3e3598aeab33154c13cc40f817158c06e1a8a8174eea892f41d5f80fcd943c8 is running failed: container process not found" 
containerID="b3e3598aeab33154c13cc40f817158c06e1a8a8174eea892f41d5f80fcd943c8" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 24 08:03:19 crc kubenswrapper[4705]: E0124 08:03:19.298569 4705 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b3e3598aeab33154c13cc40f817158c06e1a8a8174eea892f41d5f80fcd943c8 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-dbcdf5676-895jp" podUID="b2713791-b8f1-47ae-a438-c2f5e97ef433" containerName="heat-engine" Jan 24 08:03:19 crc kubenswrapper[4705]: I0124 08:03:19.714186 4705 generic.go:334] "Generic (PLEG): container finished" podID="b2713791-b8f1-47ae-a438-c2f5e97ef433" containerID="b3e3598aeab33154c13cc40f817158c06e1a8a8174eea892f41d5f80fcd943c8" exitCode=0 Jan 24 08:03:19 crc kubenswrapper[4705]: I0124 08:03:19.714933 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-dbcdf5676-895jp" event={"ID":"b2713791-b8f1-47ae-a438-c2f5e97ef433","Type":"ContainerDied","Data":"b3e3598aeab33154c13cc40f817158c06e1a8a8174eea892f41d5f80fcd943c8"} Jan 24 08:03:20 crc kubenswrapper[4705]: I0124 08:03:20.151530 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 24 08:03:20 crc kubenswrapper[4705]: I0124 08:03:20.151619 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 24 08:03:20 crc kubenswrapper[4705]: I0124 08:03:20.195469 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 24 08:03:20 crc kubenswrapper[4705]: I0124 08:03:20.203976 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 24 08:03:20 crc kubenswrapper[4705]: I0124 08:03:20.726493 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/glance-default-external-api-0" Jan 24 08:03:20 crc kubenswrapper[4705]: I0124 08:03:20.726526 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 24 08:03:22 crc kubenswrapper[4705]: I0124 08:03:22.801364 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 08:03:22 crc kubenswrapper[4705]: I0124 08:03:22.801388 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 08:03:23 crc kubenswrapper[4705]: I0124 08:03:23.195398 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:23 crc kubenswrapper[4705]: I0124 08:03:23.761945 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 24 08:03:23 crc kubenswrapper[4705]: I0124 08:03:23.829260 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 08:03:23 crc kubenswrapper[4705]: I0124 08:03:23.839317 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.467141 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:03:26 crc kubenswrapper[4705]: E0124 08:03:26.573075 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Jan 24 08:03:26 crc kubenswrapper[4705]: E0124 08:03:26.573254 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf7x8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityCon
text:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-8qzw5_openstack(cedd21a3-3cd7-415b-b600-7510e930b2a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 08:03:26 crc kubenswrapper[4705]: E0124 08:03:26.575143 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-8qzw5" podUID="cedd21a3-3cd7-415b-b600-7510e930b2a5" Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.592723 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-config-data-custom\") pod \"b2713791-b8f1-47ae-a438-c2f5e97ef433\" (UID: \"b2713791-b8f1-47ae-a438-c2f5e97ef433\") " Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.592856 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dxqk\" (UniqueName: \"kubernetes.io/projected/b2713791-b8f1-47ae-a438-c2f5e97ef433-kube-api-access-7dxqk\") pod \"b2713791-b8f1-47ae-a438-c2f5e97ef433\" (UID: \"b2713791-b8f1-47ae-a438-c2f5e97ef433\") " Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.592912 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-config-data\") pod \"b2713791-b8f1-47ae-a438-c2f5e97ef433\" (UID: \"b2713791-b8f1-47ae-a438-c2f5e97ef433\") " Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.593543 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-combined-ca-bundle\") pod \"b2713791-b8f1-47ae-a438-c2f5e97ef433\" (UID: \"b2713791-b8f1-47ae-a438-c2f5e97ef433\") " Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.598889 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b2713791-b8f1-47ae-a438-c2f5e97ef433" (UID: "b2713791-b8f1-47ae-a438-c2f5e97ef433"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.599965 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2713791-b8f1-47ae-a438-c2f5e97ef433-kube-api-access-7dxqk" (OuterVolumeSpecName: "kube-api-access-7dxqk") pod "b2713791-b8f1-47ae-a438-c2f5e97ef433" (UID: "b2713791-b8f1-47ae-a438-c2f5e97ef433"). InnerVolumeSpecName "kube-api-access-7dxqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.752260 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.753479 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dxqk\" (UniqueName: \"kubernetes.io/projected/b2713791-b8f1-47ae-a438-c2f5e97ef433-kube-api-access-7dxqk\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.756985 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-config-data" (OuterVolumeSpecName: "config-data") pod "b2713791-b8f1-47ae-a438-c2f5e97ef433" (UID: "b2713791-b8f1-47ae-a438-c2f5e97ef433"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.757111 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2713791-b8f1-47ae-a438-c2f5e97ef433" (UID: "b2713791-b8f1-47ae-a438-c2f5e97ef433"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.855845 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.856198 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2713791-b8f1-47ae-a438-c2f5e97ef433-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.893615 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-dbcdf5676-895jp" event={"ID":"b2713791-b8f1-47ae-a438-c2f5e97ef433","Type":"ContainerDied","Data":"c8cb21deda4ef7327d9fec807e9638f45b7508ec46af830ac40e9ab2c652468c"} Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.893677 4705 scope.go:117] "RemoveContainer" containerID="b3e3598aeab33154c13cc40f817158c06e1a8a8174eea892f41d5f80fcd943c8" Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.894317 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-dbcdf5676-895jp" Jan 24 08:03:26 crc kubenswrapper[4705]: E0124 08:03:26.896354 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-8qzw5" podUID="cedd21a3-3cd7-415b-b600-7510e930b2a5" Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.942448 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-dbcdf5676-895jp"] Jan 24 08:03:26 crc kubenswrapper[4705]: I0124 08:03:26.950662 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-dbcdf5676-895jp"] Jan 24 08:03:27 crc kubenswrapper[4705]: I0124 08:03:27.587769 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2713791-b8f1-47ae-a438-c2f5e97ef433" path="/var/lib/kubelet/pods/b2713791-b8f1-47ae-a438-c2f5e97ef433/volumes" Jan 24 08:03:27 crc kubenswrapper[4705]: I0124 08:03:27.903541 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5dced028-fab3-448a-bb6b-06ce8196a523","Type":"ContainerStarted","Data":"ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5"} Jan 24 08:03:27 crc kubenswrapper[4705]: I0124 08:03:27.903714 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" containerName="ceilometer-central-agent" containerID="cri-o://c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16" gracePeriod=30 Jan 24 08:03:27 crc kubenswrapper[4705]: I0124 08:03:27.903797 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 08:03:27 crc kubenswrapper[4705]: I0124 08:03:27.904186 4705 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" containerName="proxy-httpd" containerID="cri-o://ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5" gracePeriod=30 Jan 24 08:03:27 crc kubenswrapper[4705]: I0124 08:03:27.904238 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" containerName="sg-core" containerID="cri-o://23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0" gracePeriod=30 Jan 24 08:03:27 crc kubenswrapper[4705]: I0124 08:03:27.904272 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" containerName="ceilometer-notification-agent" containerID="cri-o://9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0" gracePeriod=30 Jan 24 08:03:27 crc kubenswrapper[4705]: I0124 08:03:27.935130 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.044083016 podStartE2EDuration="15.935110741s" podCreationTimestamp="2026-01-24 08:03:12 +0000 UTC" firstStartedPulling="2026-01-24 08:03:13.854336935 +0000 UTC m=+1332.574210223" lastFinishedPulling="2026-01-24 08:03:26.74536466 +0000 UTC m=+1345.465237948" observedRunningTime="2026-01-24 08:03:27.930174633 +0000 UTC m=+1346.650047921" watchObservedRunningTime="2026-01-24 08:03:27.935110741 +0000 UTC m=+1346.654984039" Jan 24 08:03:28 crc kubenswrapper[4705]: I0124 08:03:28.932205 4705 generic.go:334] "Generic (PLEG): container finished" podID="5dced028-fab3-448a-bb6b-06ce8196a523" containerID="ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5" exitCode=0 Jan 24 08:03:28 crc kubenswrapper[4705]: I0124 08:03:28.932249 4705 generic.go:334] "Generic (PLEG): container finished" podID="5dced028-fab3-448a-bb6b-06ce8196a523" 
containerID="23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0" exitCode=2 Jan 24 08:03:28 crc kubenswrapper[4705]: I0124 08:03:28.932277 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5dced028-fab3-448a-bb6b-06ce8196a523","Type":"ContainerDied","Data":"ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5"} Jan 24 08:03:28 crc kubenswrapper[4705]: I0124 08:03:28.932340 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5dced028-fab3-448a-bb6b-06ce8196a523","Type":"ContainerDied","Data":"23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0"} Jan 24 08:03:29 crc kubenswrapper[4705]: I0124 08:03:29.938565 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:03:29 crc kubenswrapper[4705]: I0124 08:03:29.946046 4705 generic.go:334] "Generic (PLEG): container finished" podID="5dced028-fab3-448a-bb6b-06ce8196a523" containerID="9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0" exitCode=0 Jan 24 08:03:29 crc kubenswrapper[4705]: I0124 08:03:29.946088 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5dced028-fab3-448a-bb6b-06ce8196a523","Type":"ContainerDied","Data":"9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0"} Jan 24 08:03:29 crc kubenswrapper[4705]: I0124 08:03:29.946144 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5dced028-fab3-448a-bb6b-06ce8196a523","Type":"ContainerDied","Data":"c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16"} Jan 24 08:03:29 crc kubenswrapper[4705]: I0124 08:03:29.946170 4705 scope.go:117] "RemoveContainer" containerID="ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5" Jan 24 08:03:29 crc kubenswrapper[4705]: I0124 08:03:29.946102 4705 generic.go:334] "Generic (PLEG): container 
finished" podID="5dced028-fab3-448a-bb6b-06ce8196a523" containerID="c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16" exitCode=0 Jan 24 08:03:29 crc kubenswrapper[4705]: I0124 08:03:29.946197 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:03:29 crc kubenswrapper[4705]: I0124 08:03:29.946228 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5dced028-fab3-448a-bb6b-06ce8196a523","Type":"ContainerDied","Data":"2fae02bee1db7694da1c6a400e4a2ebb66a10f0ac40edc8ad1632e6b2f01a0b0"} Jan 24 08:03:29 crc kubenswrapper[4705]: I0124 08:03:29.967277 4705 scope.go:117] "RemoveContainer" containerID="23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0" Jan 24 08:03:29 crc kubenswrapper[4705]: I0124 08:03:29.989479 4705 scope.go:117] "RemoveContainer" containerID="9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.022165 4705 scope.go:117] "RemoveContainer" containerID="c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.058034 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-sg-core-conf-yaml\") pod \"5dced028-fab3-448a-bb6b-06ce8196a523\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.058156 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khbwm\" (UniqueName: \"kubernetes.io/projected/5dced028-fab3-448a-bb6b-06ce8196a523-kube-api-access-khbwm\") pod \"5dced028-fab3-448a-bb6b-06ce8196a523\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.058220 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5dced028-fab3-448a-bb6b-06ce8196a523-run-httpd\") pod \"5dced028-fab3-448a-bb6b-06ce8196a523\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.058450 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-scripts\") pod \"5dced028-fab3-448a-bb6b-06ce8196a523\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.058849 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dced028-fab3-448a-bb6b-06ce8196a523-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5dced028-fab3-448a-bb6b-06ce8196a523" (UID: "5dced028-fab3-448a-bb6b-06ce8196a523"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.059936 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-combined-ca-bundle\") pod \"5dced028-fab3-448a-bb6b-06ce8196a523\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.060058 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5dced028-fab3-448a-bb6b-06ce8196a523-log-httpd\") pod \"5dced028-fab3-448a-bb6b-06ce8196a523\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.060151 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-config-data\") pod 
\"5dced028-fab3-448a-bb6b-06ce8196a523\" (UID: \"5dced028-fab3-448a-bb6b-06ce8196a523\") " Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.060905 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dced028-fab3-448a-bb6b-06ce8196a523-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5dced028-fab3-448a-bb6b-06ce8196a523" (UID: "5dced028-fab3-448a-bb6b-06ce8196a523"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.060041 4705 scope.go:117] "RemoveContainer" containerID="ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.061396 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5dced028-fab3-448a-bb6b-06ce8196a523-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.061419 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5dced028-fab3-448a-bb6b-06ce8196a523-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:30 crc kubenswrapper[4705]: E0124 08:03:30.065509 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5\": container with ID starting with ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5 not found: ID does not exist" containerID="ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.065868 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5"} err="failed to get container status 
\"ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5\": rpc error: code = NotFound desc = could not find container \"ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5\": container with ID starting with ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5 not found: ID does not exist" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.065948 4705 scope.go:117] "RemoveContainer" containerID="23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0" Jan 24 08:03:30 crc kubenswrapper[4705]: E0124 08:03:30.067200 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0\": container with ID starting with 23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0 not found: ID does not exist" containerID="23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.067244 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0"} err="failed to get container status \"23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0\": rpc error: code = NotFound desc = could not find container \"23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0\": container with ID starting with 23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0 not found: ID does not exist" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.067280 4705 scope.go:117] "RemoveContainer" containerID="9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0" Jan 24 08:03:30 crc kubenswrapper[4705]: E0124 08:03:30.067722 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0\": container with ID starting with 9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0 not found: ID does not exist" containerID="9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.067859 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0"} err="failed to get container status \"9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0\": rpc error: code = NotFound desc = could not find container \"9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0\": container with ID starting with 9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0 not found: ID does not exist" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.067997 4705 scope.go:117] "RemoveContainer" containerID="c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16" Jan 24 08:03:30 crc kubenswrapper[4705]: E0124 08:03:30.068540 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16\": container with ID starting with c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16 not found: ID does not exist" containerID="c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.068599 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16"} err="failed to get container status \"c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16\": rpc error: code = NotFound desc = could not find container \"c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16\": container with ID 
starting with c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16 not found: ID does not exist" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.068648 4705 scope.go:117] "RemoveContainer" containerID="ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.068979 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5"} err="failed to get container status \"ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5\": rpc error: code = NotFound desc = could not find container \"ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5\": container with ID starting with ecc1938d76ffe52124e27b0208044d1d1b7394a15f8d577473ede2e12cb52bb5 not found: ID does not exist" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.069005 4705 scope.go:117] "RemoveContainer" containerID="23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.069224 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0"} err="failed to get container status \"23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0\": rpc error: code = NotFound desc = could not find container \"23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0\": container with ID starting with 23bade5489913edd0b64ba6d194ceabed7a4aa33d57bab90a833513dee5be5c0 not found: ID does not exist" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.069250 4705 scope.go:117] "RemoveContainer" containerID="9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.070523 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-scripts" (OuterVolumeSpecName: "scripts") pod "5dced028-fab3-448a-bb6b-06ce8196a523" (UID: "5dced028-fab3-448a-bb6b-06ce8196a523"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.070581 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0"} err="failed to get container status \"9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0\": rpc error: code = NotFound desc = could not find container \"9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0\": container with ID starting with 9900e123bf792b82b48ce0c269484f317085d401b7c78f3b6d9511bdfaff92f0 not found: ID does not exist" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.070677 4705 scope.go:117] "RemoveContainer" containerID="c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.071049 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16"} err="failed to get container status \"c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16\": rpc error: code = NotFound desc = could not find container \"c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16\": container with ID starting with c2c1bc62ab3e90be01cac688a6af11b2f2e7b674dd2d6c25415d304582717d16 not found: ID does not exist" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.083503 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dced028-fab3-448a-bb6b-06ce8196a523-kube-api-access-khbwm" (OuterVolumeSpecName: "kube-api-access-khbwm") pod "5dced028-fab3-448a-bb6b-06ce8196a523" (UID: 
"5dced028-fab3-448a-bb6b-06ce8196a523"). InnerVolumeSpecName "kube-api-access-khbwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.097360 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5dced028-fab3-448a-bb6b-06ce8196a523" (UID: "5dced028-fab3-448a-bb6b-06ce8196a523"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.162092 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.162120 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khbwm\" (UniqueName: \"kubernetes.io/projected/5dced028-fab3-448a-bb6b-06ce8196a523-kube-api-access-khbwm\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.162132 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.200056 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5dced028-fab3-448a-bb6b-06ce8196a523" (UID: "5dced028-fab3-448a-bb6b-06ce8196a523"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.203542 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-config-data" (OuterVolumeSpecName: "config-data") pod "5dced028-fab3-448a-bb6b-06ce8196a523" (UID: "5dced028-fab3-448a-bb6b-06ce8196a523"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.263406 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.263705 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dced028-fab3-448a-bb6b-06ce8196a523-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.304794 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.314978 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.327936 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:30 crc kubenswrapper[4705]: E0124 08:03:30.328331 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" containerName="proxy-httpd" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.328347 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" containerName="proxy-httpd" Jan 24 08:03:30 crc kubenswrapper[4705]: E0124 08:03:30.328367 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b2713791-b8f1-47ae-a438-c2f5e97ef433" containerName="heat-engine" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.328374 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2713791-b8f1-47ae-a438-c2f5e97ef433" containerName="heat-engine" Jan 24 08:03:30 crc kubenswrapper[4705]: E0124 08:03:30.328388 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" containerName="sg-core" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.328394 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" containerName="sg-core" Jan 24 08:03:30 crc kubenswrapper[4705]: E0124 08:03:30.328403 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" containerName="ceilometer-notification-agent" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.328409 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" containerName="ceilometer-notification-agent" Jan 24 08:03:30 crc kubenswrapper[4705]: E0124 08:03:30.328428 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" containerName="ceilometer-central-agent" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.328434 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" containerName="ceilometer-central-agent" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.328599 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2713791-b8f1-47ae-a438-c2f5e97ef433" containerName="heat-engine" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.328615 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" containerName="proxy-httpd" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.328669 4705 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" containerName="ceilometer-central-agent" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.328685 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" containerName="sg-core" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.328694 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" containerName="ceilometer-notification-agent" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.330419 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.333221 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.333380 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.354209 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.371373 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.371512 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.371607 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7279054-6334-4b91-a660-77e8a7186da9-log-httpd\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.371639 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-scripts\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.371671 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7279054-6334-4b91-a660-77e8a7186da9-run-httpd\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.371714 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-config-data\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.371742 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsdh6\" (UniqueName: \"kubernetes.io/projected/f7279054-6334-4b91-a660-77e8a7186da9-kube-api-access-vsdh6\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.473849 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f7279054-6334-4b91-a660-77e8a7186da9-log-httpd\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.473919 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-scripts\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.474156 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7279054-6334-4b91-a660-77e8a7186da9-run-httpd\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.474274 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsdh6\" (UniqueName: \"kubernetes.io/projected/f7279054-6334-4b91-a660-77e8a7186da9-kube-api-access-vsdh6\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.474305 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-config-data\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.474420 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 
08:03:30.474623 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.474731 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7279054-6334-4b91-a660-77e8a7186da9-run-httpd\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.474853 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7279054-6334-4b91-a660-77e8a7186da9-log-httpd\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.479852 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.485145 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-scripts\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.496312 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") 
" pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.496831 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-config-data\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.511361 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsdh6\" (UniqueName: \"kubernetes.io/projected/f7279054-6334-4b91-a660-77e8a7186da9-kube-api-access-vsdh6\") pod \"ceilometer-0\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " pod="openstack/ceilometer-0" Jan 24 08:03:30 crc kubenswrapper[4705]: I0124 08:03:30.672233 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:03:31 crc kubenswrapper[4705]: I0124 08:03:31.303327 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:31 crc kubenswrapper[4705]: W0124 08:03:31.313130 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7279054_6334_4b91_a660_77e8a7186da9.slice/crio-a0f74301b53b43d82c291f76cc9876b093a70fcc1daded9d39468d5359987717 WatchSource:0}: Error finding container a0f74301b53b43d82c291f76cc9876b093a70fcc1daded9d39468d5359987717: Status 404 returned error can't find the container with id a0f74301b53b43d82c291f76cc9876b093a70fcc1daded9d39468d5359987717 Jan 24 08:03:31 crc kubenswrapper[4705]: I0124 08:03:31.589445 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dced028-fab3-448a-bb6b-06ce8196a523" path="/var/lib/kubelet/pods/5dced028-fab3-448a-bb6b-06ce8196a523/volumes" Jan 24 08:03:31 crc kubenswrapper[4705]: I0124 08:03:31.975759 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f7279054-6334-4b91-a660-77e8a7186da9","Type":"ContainerStarted","Data":"a0f74301b53b43d82c291f76cc9876b093a70fcc1daded9d39468d5359987717"} Jan 24 08:03:32 crc kubenswrapper[4705]: I0124 08:03:32.986934 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7279054-6334-4b91-a660-77e8a7186da9","Type":"ContainerStarted","Data":"f809039d2c9db7ab3890152d0f594e1b2f9103744685b4d50f7bd084a5d6cf9e"} Jan 24 08:03:32 crc kubenswrapper[4705]: I0124 08:03:32.988538 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7279054-6334-4b91-a660-77e8a7186da9","Type":"ContainerStarted","Data":"1e45c50f7c75dd4fb27c42ff334fb268ed6d808c6bc50cc25dbe636b475f1db6"} Jan 24 08:03:35 crc kubenswrapper[4705]: I0124 08:03:35.011562 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7279054-6334-4b91-a660-77e8a7186da9","Type":"ContainerStarted","Data":"6ad581d5f7ab7f7bbdcf79e00a14d05304f16428c6b3914be0f95b63ee3517a6"} Jan 24 08:03:39 crc kubenswrapper[4705]: I0124 08:03:39.223076 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7279054-6334-4b91-a660-77e8a7186da9","Type":"ContainerStarted","Data":"a6d9a82386174a0c6e5bb667c9ba63206f8614f5e5c30bf5b74bfa335d08edc1"} Jan 24 08:03:39 crc kubenswrapper[4705]: I0124 08:03:39.224019 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 08:03:40 crc kubenswrapper[4705]: I0124 08:03:40.251966 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8qzw5" event={"ID":"cedd21a3-3cd7-415b-b600-7510e930b2a5","Type":"ContainerStarted","Data":"215a6a02d3bb2622babeeb6ba7329154d689243e931ea91550e012fa5202613e"} Jan 24 08:03:40 crc kubenswrapper[4705]: I0124 08:03:40.282056 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-conductor-db-sync-8qzw5" podStartSLOduration=2.263305963 podStartE2EDuration="29.282037066s" podCreationTimestamp="2026-01-24 08:03:11 +0000 UTC" firstStartedPulling="2026-01-24 08:03:12.211255114 +0000 UTC m=+1330.931128402" lastFinishedPulling="2026-01-24 08:03:39.229986227 +0000 UTC m=+1357.949859505" observedRunningTime="2026-01-24 08:03:40.275853623 +0000 UTC m=+1358.995726911" watchObservedRunningTime="2026-01-24 08:03:40.282037066 +0000 UTC m=+1359.001910354" Jan 24 08:03:40 crc kubenswrapper[4705]: I0124 08:03:40.284109 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.190514258 podStartE2EDuration="10.284102654s" podCreationTimestamp="2026-01-24 08:03:30 +0000 UTC" firstStartedPulling="2026-01-24 08:03:31.31840135 +0000 UTC m=+1350.038274628" lastFinishedPulling="2026-01-24 08:03:38.411989736 +0000 UTC m=+1357.131863024" observedRunningTime="2026-01-24 08:03:39.257769844 +0000 UTC m=+1357.977643132" watchObservedRunningTime="2026-01-24 08:03:40.284102654 +0000 UTC m=+1359.003975942" Jan 24 08:03:42 crc kubenswrapper[4705]: I0124 08:03:42.430199 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:42 crc kubenswrapper[4705]: I0124 08:03:42.430796 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7279054-6334-4b91-a660-77e8a7186da9" containerName="ceilometer-central-agent" containerID="cri-o://1e45c50f7c75dd4fb27c42ff334fb268ed6d808c6bc50cc25dbe636b475f1db6" gracePeriod=30 Jan 24 08:03:42 crc kubenswrapper[4705]: I0124 08:03:42.431454 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7279054-6334-4b91-a660-77e8a7186da9" containerName="proxy-httpd" containerID="cri-o://a6d9a82386174a0c6e5bb667c9ba63206f8614f5e5c30bf5b74bfa335d08edc1" gracePeriod=30 Jan 24 08:03:42 crc kubenswrapper[4705]: 
I0124 08:03:42.431520 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7279054-6334-4b91-a660-77e8a7186da9" containerName="sg-core" containerID="cri-o://6ad581d5f7ab7f7bbdcf79e00a14d05304f16428c6b3914be0f95b63ee3517a6" gracePeriod=30 Jan 24 08:03:42 crc kubenswrapper[4705]: I0124 08:03:42.431563 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7279054-6334-4b91-a660-77e8a7186da9" containerName="ceilometer-notification-agent" containerID="cri-o://f809039d2c9db7ab3890152d0f594e1b2f9103744685b4d50f7bd084a5d6cf9e" gracePeriod=30 Jan 24 08:03:43 crc kubenswrapper[4705]: I0124 08:03:43.315552 4705 generic.go:334] "Generic (PLEG): container finished" podID="f7279054-6334-4b91-a660-77e8a7186da9" containerID="a6d9a82386174a0c6e5bb667c9ba63206f8614f5e5c30bf5b74bfa335d08edc1" exitCode=0 Jan 24 08:03:43 crc kubenswrapper[4705]: I0124 08:03:43.316074 4705 generic.go:334] "Generic (PLEG): container finished" podID="f7279054-6334-4b91-a660-77e8a7186da9" containerID="6ad581d5f7ab7f7bbdcf79e00a14d05304f16428c6b3914be0f95b63ee3517a6" exitCode=2 Jan 24 08:03:43 crc kubenswrapper[4705]: I0124 08:03:43.315915 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7279054-6334-4b91-a660-77e8a7186da9","Type":"ContainerDied","Data":"a6d9a82386174a0c6e5bb667c9ba63206f8614f5e5c30bf5b74bfa335d08edc1"} Jan 24 08:03:43 crc kubenswrapper[4705]: I0124 08:03:43.316237 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7279054-6334-4b91-a660-77e8a7186da9","Type":"ContainerDied","Data":"6ad581d5f7ab7f7bbdcf79e00a14d05304f16428c6b3914be0f95b63ee3517a6"} Jan 24 08:03:44 crc kubenswrapper[4705]: I0124 08:03:44.329396 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f7279054-6334-4b91-a660-77e8a7186da9","Type":"ContainerDied","Data":"f809039d2c9db7ab3890152d0f594e1b2f9103744685b4d50f7bd084a5d6cf9e"} Jan 24 08:03:44 crc kubenswrapper[4705]: I0124 08:03:44.329407 4705 generic.go:334] "Generic (PLEG): container finished" podID="f7279054-6334-4b91-a660-77e8a7186da9" containerID="f809039d2c9db7ab3890152d0f594e1b2f9103744685b4d50f7bd084a5d6cf9e" exitCode=0 Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.311709 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.353140 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-sg-core-conf-yaml\") pod \"f7279054-6334-4b91-a660-77e8a7186da9\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.353229 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7279054-6334-4b91-a660-77e8a7186da9-run-httpd\") pod \"f7279054-6334-4b91-a660-77e8a7186da9\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.353274 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsdh6\" (UniqueName: \"kubernetes.io/projected/f7279054-6334-4b91-a660-77e8a7186da9-kube-api-access-vsdh6\") pod \"f7279054-6334-4b91-a660-77e8a7186da9\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.353330 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-scripts\") pod \"f7279054-6334-4b91-a660-77e8a7186da9\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " Jan 24 
08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.353366 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-config-data\") pod \"f7279054-6334-4b91-a660-77e8a7186da9\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.353427 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-combined-ca-bundle\") pod \"f7279054-6334-4b91-a660-77e8a7186da9\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.353478 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7279054-6334-4b91-a660-77e8a7186da9-log-httpd\") pod \"f7279054-6334-4b91-a660-77e8a7186da9\" (UID: \"f7279054-6334-4b91-a660-77e8a7186da9\") " Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.353723 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7279054-6334-4b91-a660-77e8a7186da9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f7279054-6334-4b91-a660-77e8a7186da9" (UID: "f7279054-6334-4b91-a660-77e8a7186da9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.354201 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7279054-6334-4b91-a660-77e8a7186da9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f7279054-6334-4b91-a660-77e8a7186da9" (UID: "f7279054-6334-4b91-a660-77e8a7186da9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.360034 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-scripts" (OuterVolumeSpecName: "scripts") pod "f7279054-6334-4b91-a660-77e8a7186da9" (UID: "f7279054-6334-4b91-a660-77e8a7186da9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.362179 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7279054-6334-4b91-a660-77e8a7186da9-kube-api-access-vsdh6" (OuterVolumeSpecName: "kube-api-access-vsdh6") pod "f7279054-6334-4b91-a660-77e8a7186da9" (UID: "f7279054-6334-4b91-a660-77e8a7186da9"). InnerVolumeSpecName "kube-api-access-vsdh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.377516 4705 generic.go:334] "Generic (PLEG): container finished" podID="f7279054-6334-4b91-a660-77e8a7186da9" containerID="1e45c50f7c75dd4fb27c42ff334fb268ed6d808c6bc50cc25dbe636b475f1db6" exitCode=0 Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.377563 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7279054-6334-4b91-a660-77e8a7186da9","Type":"ContainerDied","Data":"1e45c50f7c75dd4fb27c42ff334fb268ed6d808c6bc50cc25dbe636b475f1db6"} Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.377593 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7279054-6334-4b91-a660-77e8a7186da9","Type":"ContainerDied","Data":"a0f74301b53b43d82c291f76cc9876b093a70fcc1daded9d39468d5359987717"} Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.377611 4705 scope.go:117] "RemoveContainer" containerID="a6d9a82386174a0c6e5bb667c9ba63206f8614f5e5c30bf5b74bfa335d08edc1" Jan 24 08:03:45 crc 
kubenswrapper[4705]: I0124 08:03:45.377702 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.393231 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f7279054-6334-4b91-a660-77e8a7186da9" (UID: "f7279054-6334-4b91-a660-77e8a7186da9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.448740 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7279054-6334-4b91-a660-77e8a7186da9" (UID: "f7279054-6334-4b91-a660-77e8a7186da9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.456449 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7279054-6334-4b91-a660-77e8a7186da9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.456490 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsdh6\" (UniqueName: \"kubernetes.io/projected/f7279054-6334-4b91-a660-77e8a7186da9-kube-api-access-vsdh6\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.456502 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.456513 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.456522 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7279054-6334-4b91-a660-77e8a7186da9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.456530 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.472233 4705 scope.go:117] "RemoveContainer" containerID="6ad581d5f7ab7f7bbdcf79e00a14d05304f16428c6b3914be0f95b63ee3517a6" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.495364 4705 scope.go:117] "RemoveContainer" containerID="f809039d2c9db7ab3890152d0f594e1b2f9103744685b4d50f7bd084a5d6cf9e" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.512217 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-config-data" (OuterVolumeSpecName: "config-data") pod "f7279054-6334-4b91-a660-77e8a7186da9" (UID: "f7279054-6334-4b91-a660-77e8a7186da9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.516798 4705 scope.go:117] "RemoveContainer" containerID="1e45c50f7c75dd4fb27c42ff334fb268ed6d808c6bc50cc25dbe636b475f1db6" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.539301 4705 scope.go:117] "RemoveContainer" containerID="a6d9a82386174a0c6e5bb667c9ba63206f8614f5e5c30bf5b74bfa335d08edc1" Jan 24 08:03:45 crc kubenswrapper[4705]: E0124 08:03:45.539688 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6d9a82386174a0c6e5bb667c9ba63206f8614f5e5c30bf5b74bfa335d08edc1\": container with ID starting with a6d9a82386174a0c6e5bb667c9ba63206f8614f5e5c30bf5b74bfa335d08edc1 not found: ID does not exist" containerID="a6d9a82386174a0c6e5bb667c9ba63206f8614f5e5c30bf5b74bfa335d08edc1" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.539728 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6d9a82386174a0c6e5bb667c9ba63206f8614f5e5c30bf5b74bfa335d08edc1"} err="failed to get container status \"a6d9a82386174a0c6e5bb667c9ba63206f8614f5e5c30bf5b74bfa335d08edc1\": rpc error: code = NotFound desc = could not find container \"a6d9a82386174a0c6e5bb667c9ba63206f8614f5e5c30bf5b74bfa335d08edc1\": container with ID starting with a6d9a82386174a0c6e5bb667c9ba63206f8614f5e5c30bf5b74bfa335d08edc1 not found: ID does not exist" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.539756 4705 scope.go:117] "RemoveContainer" containerID="6ad581d5f7ab7f7bbdcf79e00a14d05304f16428c6b3914be0f95b63ee3517a6" Jan 24 08:03:45 crc kubenswrapper[4705]: E0124 08:03:45.540002 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ad581d5f7ab7f7bbdcf79e00a14d05304f16428c6b3914be0f95b63ee3517a6\": container with ID starting with 
6ad581d5f7ab7f7bbdcf79e00a14d05304f16428c6b3914be0f95b63ee3517a6 not found: ID does not exist" containerID="6ad581d5f7ab7f7bbdcf79e00a14d05304f16428c6b3914be0f95b63ee3517a6" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.540034 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad581d5f7ab7f7bbdcf79e00a14d05304f16428c6b3914be0f95b63ee3517a6"} err="failed to get container status \"6ad581d5f7ab7f7bbdcf79e00a14d05304f16428c6b3914be0f95b63ee3517a6\": rpc error: code = NotFound desc = could not find container \"6ad581d5f7ab7f7bbdcf79e00a14d05304f16428c6b3914be0f95b63ee3517a6\": container with ID starting with 6ad581d5f7ab7f7bbdcf79e00a14d05304f16428c6b3914be0f95b63ee3517a6 not found: ID does not exist" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.540052 4705 scope.go:117] "RemoveContainer" containerID="f809039d2c9db7ab3890152d0f594e1b2f9103744685b4d50f7bd084a5d6cf9e" Jan 24 08:03:45 crc kubenswrapper[4705]: E0124 08:03:45.540251 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f809039d2c9db7ab3890152d0f594e1b2f9103744685b4d50f7bd084a5d6cf9e\": container with ID starting with f809039d2c9db7ab3890152d0f594e1b2f9103744685b4d50f7bd084a5d6cf9e not found: ID does not exist" containerID="f809039d2c9db7ab3890152d0f594e1b2f9103744685b4d50f7bd084a5d6cf9e" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.540285 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f809039d2c9db7ab3890152d0f594e1b2f9103744685b4d50f7bd084a5d6cf9e"} err="failed to get container status \"f809039d2c9db7ab3890152d0f594e1b2f9103744685b4d50f7bd084a5d6cf9e\": rpc error: code = NotFound desc = could not find container \"f809039d2c9db7ab3890152d0f594e1b2f9103744685b4d50f7bd084a5d6cf9e\": container with ID starting with f809039d2c9db7ab3890152d0f594e1b2f9103744685b4d50f7bd084a5d6cf9e not found: ID does not 
exist" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.540304 4705 scope.go:117] "RemoveContainer" containerID="1e45c50f7c75dd4fb27c42ff334fb268ed6d808c6bc50cc25dbe636b475f1db6" Jan 24 08:03:45 crc kubenswrapper[4705]: E0124 08:03:45.540573 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e45c50f7c75dd4fb27c42ff334fb268ed6d808c6bc50cc25dbe636b475f1db6\": container with ID starting with 1e45c50f7c75dd4fb27c42ff334fb268ed6d808c6bc50cc25dbe636b475f1db6 not found: ID does not exist" containerID="1e45c50f7c75dd4fb27c42ff334fb268ed6d808c6bc50cc25dbe636b475f1db6" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.540618 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e45c50f7c75dd4fb27c42ff334fb268ed6d808c6bc50cc25dbe636b475f1db6"} err="failed to get container status \"1e45c50f7c75dd4fb27c42ff334fb268ed6d808c6bc50cc25dbe636b475f1db6\": rpc error: code = NotFound desc = could not find container \"1e45c50f7c75dd4fb27c42ff334fb268ed6d808c6bc50cc25dbe636b475f1db6\": container with ID starting with 1e45c50f7c75dd4fb27c42ff334fb268ed6d808c6bc50cc25dbe636b475f1db6 not found: ID does not exist" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.557738 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7279054-6334-4b91-a660-77e8a7186da9-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.707815 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.721593 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.731318 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:45 crc kubenswrapper[4705]: E0124 
08:03:45.731752 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7279054-6334-4b91-a660-77e8a7186da9" containerName="ceilometer-central-agent" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.731771 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7279054-6334-4b91-a660-77e8a7186da9" containerName="ceilometer-central-agent" Jan 24 08:03:45 crc kubenswrapper[4705]: E0124 08:03:45.731785 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7279054-6334-4b91-a660-77e8a7186da9" containerName="ceilometer-notification-agent" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.731791 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7279054-6334-4b91-a660-77e8a7186da9" containerName="ceilometer-notification-agent" Jan 24 08:03:45 crc kubenswrapper[4705]: E0124 08:03:45.731804 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7279054-6334-4b91-a660-77e8a7186da9" containerName="proxy-httpd" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.731811 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7279054-6334-4b91-a660-77e8a7186da9" containerName="proxy-httpd" Jan 24 08:03:45 crc kubenswrapper[4705]: E0124 08:03:45.731846 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7279054-6334-4b91-a660-77e8a7186da9" containerName="sg-core" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.731856 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7279054-6334-4b91-a660-77e8a7186da9" containerName="sg-core" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.732089 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7279054-6334-4b91-a660-77e8a7186da9" containerName="sg-core" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.732104 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7279054-6334-4b91-a660-77e8a7186da9" containerName="proxy-httpd" Jan 24 08:03:45 crc kubenswrapper[4705]: 
I0124 08:03:45.732120 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7279054-6334-4b91-a660-77e8a7186da9" containerName="ceilometer-central-agent" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.732132 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7279054-6334-4b91-a660-77e8a7186da9" containerName="ceilometer-notification-agent" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.733805 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.738221 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.738393 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.749651 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.863793 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnxrn\" (UniqueName: \"kubernetes.io/projected/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-kube-api-access-rnxrn\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.863891 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-run-httpd\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.864070 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-log-httpd\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.864271 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.864508 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-config-data\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.864561 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-scripts\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.864591 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.966395 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " 
pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.967054 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-config-data\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.967105 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-scripts\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.967133 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.967616 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnxrn\" (UniqueName: \"kubernetes.io/projected/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-kube-api-access-rnxrn\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.967678 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-run-httpd\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.967752 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-log-httpd\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.968221 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-log-httpd\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.968328 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-run-httpd\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.971143 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-scripts\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.971752 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-config-data\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.974039 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.982218 4705 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:45 crc kubenswrapper[4705]: I0124 08:03:45.987663 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnxrn\" (UniqueName: \"kubernetes.io/projected/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-kube-api-access-rnxrn\") pod \"ceilometer-0\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " pod="openstack/ceilometer-0" Jan 24 08:03:46 crc kubenswrapper[4705]: I0124 08:03:46.071987 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:03:46 crc kubenswrapper[4705]: I0124 08:03:46.621209 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:03:47 crc kubenswrapper[4705]: I0124 08:03:47.419371 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42","Type":"ContainerStarted","Data":"3bd5558be09679cfad15742e9aee7d40c90635edff87496b25a6fd0d4cd8fda3"} Jan 24 08:03:47 crc kubenswrapper[4705]: I0124 08:03:47.586583 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7279054-6334-4b91-a660-77e8a7186da9" path="/var/lib/kubelet/pods/f7279054-6334-4b91-a660-77e8a7186da9/volumes" Jan 24 08:03:49 crc kubenswrapper[4705]: I0124 08:03:49.437057 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42","Type":"ContainerStarted","Data":"5545ae730b47281a17d5b339d9724aed9d54ef0ad399b7e62e79c3034046cd40"} Jan 24 08:03:50 crc kubenswrapper[4705]: I0124 08:03:50.450034 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42","Type":"ContainerStarted","Data":"4f0f1152afea40871d3c6bf29f2e9a6fc26eebc48bcb29f1d2a9076d1759eca4"} Jan 24 08:03:50 crc kubenswrapper[4705]: I0124 08:03:50.450388 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42","Type":"ContainerStarted","Data":"210ddcc0b0a6d2bd196c6067db234a821b417456df4847f36fc7210486703bb2"} Jan 24 08:03:52 crc kubenswrapper[4705]: I0124 08:03:52.467456 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42","Type":"ContainerStarted","Data":"f1e8dfc34a8303526c07c7684d685af5b09a47a57e94469b0b066f01ee1f4898"} Jan 24 08:03:52 crc kubenswrapper[4705]: I0124 08:03:52.469053 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 08:03:52 crc kubenswrapper[4705]: I0124 08:03:52.497639 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.586734715 podStartE2EDuration="7.497615006s" podCreationTimestamp="2026-01-24 08:03:45 +0000 UTC" firstStartedPulling="2026-01-24 08:03:46.627800491 +0000 UTC m=+1365.347673779" lastFinishedPulling="2026-01-24 08:03:51.538680792 +0000 UTC m=+1370.258554070" observedRunningTime="2026-01-24 08:03:52.486889786 +0000 UTC m=+1371.206763084" watchObservedRunningTime="2026-01-24 08:03:52.497615006 +0000 UTC m=+1371.217488294" Jan 24 08:03:53 crc kubenswrapper[4705]: I0124 08:03:53.478517 4705 generic.go:334] "Generic (PLEG): container finished" podID="cedd21a3-3cd7-415b-b600-7510e930b2a5" containerID="215a6a02d3bb2622babeeb6ba7329154d689243e931ea91550e012fa5202613e" exitCode=0 Jan 24 08:03:53 crc kubenswrapper[4705]: I0124 08:03:53.478613 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8qzw5" 
event={"ID":"cedd21a3-3cd7-415b-b600-7510e930b2a5","Type":"ContainerDied","Data":"215a6a02d3bb2622babeeb6ba7329154d689243e931ea91550e012fa5202613e"} Jan 24 08:03:54 crc kubenswrapper[4705]: I0124 08:03:54.841181 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8qzw5" Jan 24 08:03:54 crc kubenswrapper[4705]: I0124 08:03:54.941996 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-combined-ca-bundle\") pod \"cedd21a3-3cd7-415b-b600-7510e930b2a5\" (UID: \"cedd21a3-3cd7-415b-b600-7510e930b2a5\") " Jan 24 08:03:54 crc kubenswrapper[4705]: I0124 08:03:54.942139 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf7x8\" (UniqueName: \"kubernetes.io/projected/cedd21a3-3cd7-415b-b600-7510e930b2a5-kube-api-access-gf7x8\") pod \"cedd21a3-3cd7-415b-b600-7510e930b2a5\" (UID: \"cedd21a3-3cd7-415b-b600-7510e930b2a5\") " Jan 24 08:03:54 crc kubenswrapper[4705]: I0124 08:03:54.942245 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-config-data\") pod \"cedd21a3-3cd7-415b-b600-7510e930b2a5\" (UID: \"cedd21a3-3cd7-415b-b600-7510e930b2a5\") " Jan 24 08:03:54 crc kubenswrapper[4705]: I0124 08:03:54.942369 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-scripts\") pod \"cedd21a3-3cd7-415b-b600-7510e930b2a5\" (UID: \"cedd21a3-3cd7-415b-b600-7510e930b2a5\") " Jan 24 08:03:54 crc kubenswrapper[4705]: I0124 08:03:54.947378 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-scripts" (OuterVolumeSpecName: "scripts") pod 
"cedd21a3-3cd7-415b-b600-7510e930b2a5" (UID: "cedd21a3-3cd7-415b-b600-7510e930b2a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:54 crc kubenswrapper[4705]: I0124 08:03:54.947930 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cedd21a3-3cd7-415b-b600-7510e930b2a5-kube-api-access-gf7x8" (OuterVolumeSpecName: "kube-api-access-gf7x8") pod "cedd21a3-3cd7-415b-b600-7510e930b2a5" (UID: "cedd21a3-3cd7-415b-b600-7510e930b2a5"). InnerVolumeSpecName "kube-api-access-gf7x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:54 crc kubenswrapper[4705]: I0124 08:03:54.970511 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-config-data" (OuterVolumeSpecName: "config-data") pod "cedd21a3-3cd7-415b-b600-7510e930b2a5" (UID: "cedd21a3-3cd7-415b-b600-7510e930b2a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:54 crc kubenswrapper[4705]: I0124 08:03:54.972731 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cedd21a3-3cd7-415b-b600-7510e930b2a5" (UID: "cedd21a3-3cd7-415b-b600-7510e930b2a5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.044065 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.044103 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf7x8\" (UniqueName: \"kubernetes.io/projected/cedd21a3-3cd7-415b-b600-7510e930b2a5-kube-api-access-gf7x8\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.044117 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.044125 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cedd21a3-3cd7-415b-b600-7510e930b2a5-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.499566 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8qzw5" event={"ID":"cedd21a3-3cd7-415b-b600-7510e930b2a5","Type":"ContainerDied","Data":"429f368c68d2539fd2aa128e7bce0c9a575641704f8fc86b22be1d0707a53021"} Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.499994 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="429f368c68d2539fd2aa128e7bce0c9a575641704f8fc86b22be1d0707a53021" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.499745 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8qzw5" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.620483 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 24 08:03:55 crc kubenswrapper[4705]: E0124 08:03:55.621029 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cedd21a3-3cd7-415b-b600-7510e930b2a5" containerName="nova-cell0-conductor-db-sync" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.621050 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cedd21a3-3cd7-415b-b600-7510e930b2a5" containerName="nova-cell0-conductor-db-sync" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.621214 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="cedd21a3-3cd7-415b-b600-7510e930b2a5" containerName="nova-cell0-conductor-db-sync" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.621903 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.624047 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.624312 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7n4w4" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.632028 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.759781 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e941386-bff0-4fd5-a452-0f659b35eae9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7e941386-bff0-4fd5-a452-0f659b35eae9\") " pod="openstack/nova-cell0-conductor-0" Jan 24 08:03:55 crc kubenswrapper[4705]: 
I0124 08:03:55.759957 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e941386-bff0-4fd5-a452-0f659b35eae9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7e941386-bff0-4fd5-a452-0f659b35eae9\") " pod="openstack/nova-cell0-conductor-0" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.759995 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwt6g\" (UniqueName: \"kubernetes.io/projected/7e941386-bff0-4fd5-a452-0f659b35eae9-kube-api-access-jwt6g\") pod \"nova-cell0-conductor-0\" (UID: \"7e941386-bff0-4fd5-a452-0f659b35eae9\") " pod="openstack/nova-cell0-conductor-0" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.861915 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e941386-bff0-4fd5-a452-0f659b35eae9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7e941386-bff0-4fd5-a452-0f659b35eae9\") " pod="openstack/nova-cell0-conductor-0" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.861985 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwt6g\" (UniqueName: \"kubernetes.io/projected/7e941386-bff0-4fd5-a452-0f659b35eae9-kube-api-access-jwt6g\") pod \"nova-cell0-conductor-0\" (UID: \"7e941386-bff0-4fd5-a452-0f659b35eae9\") " pod="openstack/nova-cell0-conductor-0" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.862100 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e941386-bff0-4fd5-a452-0f659b35eae9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7e941386-bff0-4fd5-a452-0f659b35eae9\") " pod="openstack/nova-cell0-conductor-0" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.867221 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e941386-bff0-4fd5-a452-0f659b35eae9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7e941386-bff0-4fd5-a452-0f659b35eae9\") " pod="openstack/nova-cell0-conductor-0" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.867436 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e941386-bff0-4fd5-a452-0f659b35eae9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7e941386-bff0-4fd5-a452-0f659b35eae9\") " pod="openstack/nova-cell0-conductor-0" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.899352 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwt6g\" (UniqueName: \"kubernetes.io/projected/7e941386-bff0-4fd5-a452-0f659b35eae9-kube-api-access-jwt6g\") pod \"nova-cell0-conductor-0\" (UID: \"7e941386-bff0-4fd5-a452-0f659b35eae9\") " pod="openstack/nova-cell0-conductor-0" Jan 24 08:03:55 crc kubenswrapper[4705]: I0124 08:03:55.949916 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 24 08:03:56 crc kubenswrapper[4705]: I0124 08:03:56.439193 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 24 08:03:56 crc kubenswrapper[4705]: I0124 08:03:56.524935 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7e941386-bff0-4fd5-a452-0f659b35eae9","Type":"ContainerStarted","Data":"dd2e295cfbc13883416f2275d8620c3f47c5588cc705cec40c65c4485ad05e8a"} Jan 24 08:03:57 crc kubenswrapper[4705]: I0124 08:03:57.539250 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7e941386-bff0-4fd5-a452-0f659b35eae9","Type":"ContainerStarted","Data":"2032f7adc160537e44008e2894ff097399c93eb344d71482f0dec2dc5f6c9d93"} Jan 24 08:03:57 crc kubenswrapper[4705]: I0124 08:03:57.541680 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 24 08:04:05 crc kubenswrapper[4705]: I0124 08:04:05.978616 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:05.998555 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=10.998534252 podStartE2EDuration="10.998534252s" podCreationTimestamp="2026-01-24 08:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:03:57.566616108 +0000 UTC m=+1376.286489396" watchObservedRunningTime="2026-01-24 08:04:05.998534252 +0000 UTC m=+1384.718407540" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.400518 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-c96dk"] Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.402229 4705 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c96dk" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.408340 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.408395 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.422085 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-c96dk"] Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.469985 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-config-data\") pod \"nova-cell0-cell-mapping-c96dk\" (UID: \"1666f128-9c4f-4a59-8210-bf5783b38f5f\") " pod="openstack/nova-cell0-cell-mapping-c96dk" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.470116 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c96dk\" (UID: \"1666f128-9c4f-4a59-8210-bf5783b38f5f\") " pod="openstack/nova-cell0-cell-mapping-c96dk" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.470143 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-scripts\") pod \"nova-cell0-cell-mapping-c96dk\" (UID: \"1666f128-9c4f-4a59-8210-bf5783b38f5f\") " pod="openstack/nova-cell0-cell-mapping-c96dk" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.470365 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jxppw\" (UniqueName: \"kubernetes.io/projected/1666f128-9c4f-4a59-8210-bf5783b38f5f-kube-api-access-jxppw\") pod \"nova-cell0-cell-mapping-c96dk\" (UID: \"1666f128-9c4f-4a59-8210-bf5783b38f5f\") " pod="openstack/nova-cell0-cell-mapping-c96dk" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.571916 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c96dk\" (UID: \"1666f128-9c4f-4a59-8210-bf5783b38f5f\") " pod="openstack/nova-cell0-cell-mapping-c96dk" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.572005 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-scripts\") pod \"nova-cell0-cell-mapping-c96dk\" (UID: \"1666f128-9c4f-4a59-8210-bf5783b38f5f\") " pod="openstack/nova-cell0-cell-mapping-c96dk" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.572091 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxppw\" (UniqueName: \"kubernetes.io/projected/1666f128-9c4f-4a59-8210-bf5783b38f5f-kube-api-access-jxppw\") pod \"nova-cell0-cell-mapping-c96dk\" (UID: \"1666f128-9c4f-4a59-8210-bf5783b38f5f\") " pod="openstack/nova-cell0-cell-mapping-c96dk" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.572250 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-config-data\") pod \"nova-cell0-cell-mapping-c96dk\" (UID: \"1666f128-9c4f-4a59-8210-bf5783b38f5f\") " pod="openstack/nova-cell0-cell-mapping-c96dk" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.582734 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-config-data\") pod \"nova-cell0-cell-mapping-c96dk\" (UID: \"1666f128-9c4f-4a59-8210-bf5783b38f5f\") " pod="openstack/nova-cell0-cell-mapping-c96dk" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.600681 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c96dk\" (UID: \"1666f128-9c4f-4a59-8210-bf5783b38f5f\") " pod="openstack/nova-cell0-cell-mapping-c96dk" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.609358 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-scripts\") pod \"nova-cell0-cell-mapping-c96dk\" (UID: \"1666f128-9c4f-4a59-8210-bf5783b38f5f\") " pod="openstack/nova-cell0-cell-mapping-c96dk" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.621734 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxppw\" (UniqueName: \"kubernetes.io/projected/1666f128-9c4f-4a59-8210-bf5783b38f5f-kube-api-access-jxppw\") pod \"nova-cell0-cell-mapping-c96dk\" (UID: \"1666f128-9c4f-4a59-8210-bf5783b38f5f\") " pod="openstack/nova-cell0-cell-mapping-c96dk" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.644664 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.645869 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.654564 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.705710 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.707134 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.721393 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.723117 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.737456 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.770199 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.770330 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.776590 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp6t2\" (UniqueName: \"kubernetes.io/projected/e6a0e216-7d83-4c64-a162-5dc580401bbb-kube-api-access-mp6t2\") pod \"nova-cell1-novncproxy-0\" (UID: \"e6a0e216-7d83-4c64-a162-5dc580401bbb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.776637 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a0e216-7d83-4c64-a162-5dc580401bbb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e6a0e216-7d83-4c64-a162-5dc580401bbb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.776665 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6a0e216-7d83-4c64-a162-5dc580401bbb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e6a0e216-7d83-4c64-a162-5dc580401bbb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.776686 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd04c245-6baf-44c1-8305-792b3e6607ad-config-data\") pod \"nova-scheduler-0\" (UID: \"bd04c245-6baf-44c1-8305-792b3e6607ad\") " pod="openstack/nova-scheduler-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.776727 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd04c245-6baf-44c1-8305-792b3e6607ad-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bd04c245-6baf-44c1-8305-792b3e6607ad\") " 
pod="openstack/nova-scheduler-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.776788 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc8qm\" (UniqueName: \"kubernetes.io/projected/bd04c245-6baf-44c1-8305-792b3e6607ad-kube-api-access-mc8qm\") pod \"nova-scheduler-0\" (UID: \"bd04c245-6baf-44c1-8305-792b3e6607ad\") " pod="openstack/nova-scheduler-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.776911 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.778147 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c96dk" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.808025 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.874000 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.876077 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.880151 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc8qm\" (UniqueName: \"kubernetes.io/projected/bd04c245-6baf-44c1-8305-792b3e6607ad-kube-api-access-mc8qm\") pod \"nova-scheduler-0\" (UID: \"bd04c245-6baf-44c1-8305-792b3e6607ad\") " pod="openstack/nova-scheduler-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.880220 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee15e305-528c-4453-a4e7-11afe6c9d348-logs\") pod \"nova-api-0\" (UID: \"ee15e305-528c-4453-a4e7-11afe6c9d348\") " pod="openstack/nova-api-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.880262 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee15e305-528c-4453-a4e7-11afe6c9d348-config-data\") pod \"nova-api-0\" (UID: \"ee15e305-528c-4453-a4e7-11afe6c9d348\") " pod="openstack/nova-api-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.880299 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzvqq\" (UniqueName: \"kubernetes.io/projected/ee15e305-528c-4453-a4e7-11afe6c9d348-kube-api-access-kzvqq\") pod \"nova-api-0\" (UID: \"ee15e305-528c-4453-a4e7-11afe6c9d348\") " pod="openstack/nova-api-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.880317 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp6t2\" (UniqueName: \"kubernetes.io/projected/e6a0e216-7d83-4c64-a162-5dc580401bbb-kube-api-access-mp6t2\") pod \"nova-cell1-novncproxy-0\" (UID: \"e6a0e216-7d83-4c64-a162-5dc580401bbb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.880342 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a0e216-7d83-4c64-a162-5dc580401bbb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e6a0e216-7d83-4c64-a162-5dc580401bbb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.880371 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6a0e216-7d83-4c64-a162-5dc580401bbb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e6a0e216-7d83-4c64-a162-5dc580401bbb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.880395 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd04c245-6baf-44c1-8305-792b3e6607ad-config-data\") pod \"nova-scheduler-0\" (UID: \"bd04c245-6baf-44c1-8305-792b3e6607ad\") " pod="openstack/nova-scheduler-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.880422 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee15e305-528c-4453-a4e7-11afe6c9d348-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee15e305-528c-4453-a4e7-11afe6c9d348\") " pod="openstack/nova-api-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.880457 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd04c245-6baf-44c1-8305-792b3e6607ad-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bd04c245-6baf-44c1-8305-792b3e6607ad\") " pod="openstack/nova-scheduler-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.891898 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 24 08:04:06 crc kubenswrapper[4705]: 
I0124 08:04:06.896226 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6a0e216-7d83-4c64-a162-5dc580401bbb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e6a0e216-7d83-4c64-a162-5dc580401bbb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.899452 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a0e216-7d83-4c64-a162-5dc580401bbb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e6a0e216-7d83-4c64-a162-5dc580401bbb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.902700 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd04c245-6baf-44c1-8305-792b3e6607ad-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bd04c245-6baf-44c1-8305-792b3e6607ad\") " pod="openstack/nova-scheduler-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.903773 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.917935 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc8qm\" (UniqueName: \"kubernetes.io/projected/bd04c245-6baf-44c1-8305-792b3e6607ad-kube-api-access-mc8qm\") pod \"nova-scheduler-0\" (UID: \"bd04c245-6baf-44c1-8305-792b3e6607ad\") " pod="openstack/nova-scheduler-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.918206 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd04c245-6baf-44c1-8305-792b3e6607ad-config-data\") pod \"nova-scheduler-0\" (UID: \"bd04c245-6baf-44c1-8305-792b3e6607ad\") " pod="openstack/nova-scheduler-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.950566 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp6t2\" (UniqueName: \"kubernetes.io/projected/e6a0e216-7d83-4c64-a162-5dc580401bbb-kube-api-access-mp6t2\") pod \"nova-cell1-novncproxy-0\" (UID: \"e6a0e216-7d83-4c64-a162-5dc580401bbb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.982625 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krd47\" (UniqueName: \"kubernetes.io/projected/bb9512b4-f71e-4659-9332-165fe5a86c08-kube-api-access-krd47\") pod \"nova-metadata-0\" (UID: \"bb9512b4-f71e-4659-9332-165fe5a86c08\") " pod="openstack/nova-metadata-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.982762 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee15e305-528c-4453-a4e7-11afe6c9d348-logs\") pod \"nova-api-0\" (UID: \"ee15e305-528c-4453-a4e7-11afe6c9d348\") " pod="openstack/nova-api-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.982838 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee15e305-528c-4453-a4e7-11afe6c9d348-config-data\") pod \"nova-api-0\" (UID: \"ee15e305-528c-4453-a4e7-11afe6c9d348\") " pod="openstack/nova-api-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.982875 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9512b4-f71e-4659-9332-165fe5a86c08-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb9512b4-f71e-4659-9332-165fe5a86c08\") " pod="openstack/nova-metadata-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.982951 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bb9512b4-f71e-4659-9332-165fe5a86c08-logs\") pod \"nova-metadata-0\" (UID: \"bb9512b4-f71e-4659-9332-165fe5a86c08\") " pod="openstack/nova-metadata-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.982977 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzvqq\" (UniqueName: \"kubernetes.io/projected/ee15e305-528c-4453-a4e7-11afe6c9d348-kube-api-access-kzvqq\") pod \"nova-api-0\" (UID: \"ee15e305-528c-4453-a4e7-11afe6c9d348\") " pod="openstack/nova-api-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.983030 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb9512b4-f71e-4659-9332-165fe5a86c08-config-data\") pod \"nova-metadata-0\" (UID: \"bb9512b4-f71e-4659-9332-165fe5a86c08\") " pod="openstack/nova-metadata-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.983064 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee15e305-528c-4453-a4e7-11afe6c9d348-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee15e305-528c-4453-a4e7-11afe6c9d348\") " pod="openstack/nova-api-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.987705 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee15e305-528c-4453-a4e7-11afe6c9d348-logs\") pod \"nova-api-0\" (UID: \"ee15e305-528c-4453-a4e7-11afe6c9d348\") " pod="openstack/nova-api-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.988982 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee15e305-528c-4453-a4e7-11afe6c9d348-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee15e305-528c-4453-a4e7-11afe6c9d348\") " pod="openstack/nova-api-0" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 
08:04:06.989336 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-zw4j2"] Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.990868 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:06 crc kubenswrapper[4705]: I0124 08:04:06.992849 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee15e305-528c-4453-a4e7-11afe6c9d348-config-data\") pod \"nova-api-0\" (UID: \"ee15e305-528c-4453-a4e7-11afe6c9d348\") " pod="openstack/nova-api-0" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.005172 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-zw4j2"] Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.013405 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzvqq\" (UniqueName: \"kubernetes.io/projected/ee15e305-528c-4453-a4e7-11afe6c9d348-kube-api-access-kzvqq\") pod \"nova-api-0\" (UID: \"ee15e305-528c-4453-a4e7-11afe6c9d348\") " pod="openstack/nova-api-0" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.041546 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.062143 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.071738 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.071782 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.085324 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9512b4-f71e-4659-9332-165fe5a86c08-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb9512b4-f71e-4659-9332-165fe5a86c08\") " pod="openstack/nova-metadata-0" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.085863 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd8k4\" (UniqueName: \"kubernetes.io/projected/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-kube-api-access-hd8k4\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.085903 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb9512b4-f71e-4659-9332-165fe5a86c08-logs\") pod \"nova-metadata-0\" (UID: \"bb9512b4-f71e-4659-9332-165fe5a86c08\") " pod="openstack/nova-metadata-0" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 
08:04:07.085965 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-config\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.086060 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb9512b4-f71e-4659-9332-165fe5a86c08-config-data\") pod \"nova-metadata-0\" (UID: \"bb9512b4-f71e-4659-9332-165fe5a86c08\") " pod="openstack/nova-metadata-0" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.086093 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.086220 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.086269 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krd47\" (UniqueName: \"kubernetes.io/projected/bb9512b4-f71e-4659-9332-165fe5a86c08-kube-api-access-krd47\") pod \"nova-metadata-0\" (UID: \"bb9512b4-f71e-4659-9332-165fe5a86c08\") " pod="openstack/nova-metadata-0" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.086301 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb9512b4-f71e-4659-9332-165fe5a86c08-logs\") pod \"nova-metadata-0\" (UID: \"bb9512b4-f71e-4659-9332-165fe5a86c08\") " pod="openstack/nova-metadata-0" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.086424 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.086477 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.089958 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9512b4-f71e-4659-9332-165fe5a86c08-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb9512b4-f71e-4659-9332-165fe5a86c08\") " pod="openstack/nova-metadata-0" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.093444 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb9512b4-f71e-4659-9332-165fe5a86c08-config-data\") pod \"nova-metadata-0\" (UID: \"bb9512b4-f71e-4659-9332-165fe5a86c08\") " pod="openstack/nova-metadata-0" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.109742 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.115431 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krd47\" (UniqueName: \"kubernetes.io/projected/bb9512b4-f71e-4659-9332-165fe5a86c08-kube-api-access-krd47\") pod \"nova-metadata-0\" (UID: \"bb9512b4-f71e-4659-9332-165fe5a86c08\") " pod="openstack/nova-metadata-0" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.191064 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.191405 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.191456 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd8k4\" (UniqueName: \"kubernetes.io/projected/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-kube-api-access-hd8k4\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.191511 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-config\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:07 crc 
kubenswrapper[4705]: I0124 08:04:07.191559 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.191623 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.192421 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.192795 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.193166 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-config\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.193386 4705 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2"
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.207713 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2"
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.220303 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd8k4\" (UniqueName: \"kubernetes.io/projected/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-kube-api-access-hd8k4\") pod \"dnsmasq-dns-568d7fd7cf-zw4j2\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2"
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.350020 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.386283 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2"
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.466136 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-c96dk"]
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.678568 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c96dk" event={"ID":"1666f128-9c4f-4a59-8210-bf5783b38f5f","Type":"ContainerStarted","Data":"d1f5e722698186122036ae8b1ac59976331998a74647936918cfe8d03d276c19"}
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.787542 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4f7gt"]
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.788806 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4f7gt"
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.799941 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.800098 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.815763 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4f7gt"]
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.827140 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 24 08:04:07 crc kubenswrapper[4705]: W0124 08:04:07.832734 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd04c245_6baf_44c1_8305_792b3e6607ad.slice/crio-97ffb0aa0ba1d654bc39f886a7b338e8e658e1485cca5a60c2fa4692efa96d7a WatchSource:0}: Error finding container 97ffb0aa0ba1d654bc39f886a7b338e8e658e1485cca5a60c2fa4692efa96d7a: Status 404 returned error can't find the container with id 97ffb0aa0ba1d654bc39f886a7b338e8e658e1485cca5a60c2fa4692efa96d7a
Jan 24 08:04:07 crc kubenswrapper[4705]: W0124 08:04:07.835034 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6a0e216_7d83_4c64_a162_5dc580401bbb.slice/crio-a63db3e8c86590229114f2bf9ecdb879f1c4a46278db93733a1c892e1e71bb1b WatchSource:0}: Error finding container a63db3e8c86590229114f2bf9ecdb879f1c4a46278db93733a1c892e1e71bb1b: Status 404 returned error can't find the container with id a63db3e8c86590229114f2bf9ecdb879f1c4a46278db93733a1c892e1e71bb1b
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.836804 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.848132 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.917194 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-scripts\") pod \"nova-cell1-conductor-db-sync-4f7gt\" (UID: \"1b478974-9991-4090-89ea-48d3953d822d\") " pod="openstack/nova-cell1-conductor-db-sync-4f7gt"
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.917272 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4f7gt\" (UID: \"1b478974-9991-4090-89ea-48d3953d822d\") " pod="openstack/nova-cell1-conductor-db-sync-4f7gt"
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.917384 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-config-data\") pod \"nova-cell1-conductor-db-sync-4f7gt\" (UID: \"1b478974-9991-4090-89ea-48d3953d822d\") " pod="openstack/nova-cell1-conductor-db-sync-4f7gt"
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.917434 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdt6c\" (UniqueName: \"kubernetes.io/projected/1b478974-9991-4090-89ea-48d3953d822d-kube-api-access-pdt6c\") pod \"nova-cell1-conductor-db-sync-4f7gt\" (UID: \"1b478974-9991-4090-89ea-48d3953d822d\") " pod="openstack/nova-cell1-conductor-db-sync-4f7gt"
Jan 24 08:04:07 crc kubenswrapper[4705]: I0124 08:04:07.984612 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 24 08:04:07 crc kubenswrapper[4705]: W0124 08:04:07.992467 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee15e305_528c_4453_a4e7_11afe6c9d348.slice/crio-85275b4cb5a38031f39037e36fd1faa9069d398b563da80669c097e5eb3a2a42 WatchSource:0}: Error finding container 85275b4cb5a38031f39037e36fd1faa9069d398b563da80669c097e5eb3a2a42: Status 404 returned error can't find the container with id 85275b4cb5a38031f39037e36fd1faa9069d398b563da80669c097e5eb3a2a42
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.019136 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-scripts\") pod \"nova-cell1-conductor-db-sync-4f7gt\" (UID: \"1b478974-9991-4090-89ea-48d3953d822d\") " pod="openstack/nova-cell1-conductor-db-sync-4f7gt"
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.019200 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4f7gt\" (UID: \"1b478974-9991-4090-89ea-48d3953d822d\") " pod="openstack/nova-cell1-conductor-db-sync-4f7gt"
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.019300 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-config-data\") pod \"nova-cell1-conductor-db-sync-4f7gt\" (UID: \"1b478974-9991-4090-89ea-48d3953d822d\") " pod="openstack/nova-cell1-conductor-db-sync-4f7gt"
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.019350 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdt6c\" (UniqueName: \"kubernetes.io/projected/1b478974-9991-4090-89ea-48d3953d822d-kube-api-access-pdt6c\") pod \"nova-cell1-conductor-db-sync-4f7gt\" (UID: \"1b478974-9991-4090-89ea-48d3953d822d\") " pod="openstack/nova-cell1-conductor-db-sync-4f7gt"
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.025549 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4f7gt\" (UID: \"1b478974-9991-4090-89ea-48d3953d822d\") " pod="openstack/nova-cell1-conductor-db-sync-4f7gt"
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.025573 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-scripts\") pod \"nova-cell1-conductor-db-sync-4f7gt\" (UID: \"1b478974-9991-4090-89ea-48d3953d822d\") " pod="openstack/nova-cell1-conductor-db-sync-4f7gt"
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.025631 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-config-data\") pod \"nova-cell1-conductor-db-sync-4f7gt\" (UID: \"1b478974-9991-4090-89ea-48d3953d822d\") " pod="openstack/nova-cell1-conductor-db-sync-4f7gt"
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.041198 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdt6c\" (UniqueName: \"kubernetes.io/projected/1b478974-9991-4090-89ea-48d3953d822d-kube-api-access-pdt6c\") pod \"nova-cell1-conductor-db-sync-4f7gt\" (UID: \"1b478974-9991-4090-89ea-48d3953d822d\") " pod="openstack/nova-cell1-conductor-db-sync-4f7gt"
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.123755 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4f7gt"
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.138095 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 24 08:04:08 crc kubenswrapper[4705]: W0124 08:04:08.150192 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb9512b4_f71e_4659_9332_165fe5a86c08.slice/crio-ddaef3c79da538f5999b2c42231cd366e74a5f6ad3982921dd477dfe86c815e0 WatchSource:0}: Error finding container ddaef3c79da538f5999b2c42231cd366e74a5f6ad3982921dd477dfe86c815e0: Status 404 returned error can't find the container with id ddaef3c79da538f5999b2c42231cd366e74a5f6ad3982921dd477dfe86c815e0
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.151750 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-zw4j2"]
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.637102 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4f7gt"]
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.695121 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4f7gt" event={"ID":"1b478974-9991-4090-89ea-48d3953d822d","Type":"ContainerStarted","Data":"65ef279033c2e942f443de36f3b09fd9536b185b6a213e13fa4246899f5868b5"}
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.779728 4705 generic.go:334] "Generic (PLEG): container finished" podID="9f9b02e6-c95b-4670-96d0-9b36e96eb14c" containerID="99fd80ce3d0cbc79a4ea86f3cc2afeffe72a2906f7de0becf45cbf098e3e4ba6" exitCode=0
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.779882 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" event={"ID":"9f9b02e6-c95b-4670-96d0-9b36e96eb14c","Type":"ContainerDied","Data":"99fd80ce3d0cbc79a4ea86f3cc2afeffe72a2906f7de0becf45cbf098e3e4ba6"}
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.779960 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" event={"ID":"9f9b02e6-c95b-4670-96d0-9b36e96eb14c","Type":"ContainerStarted","Data":"32fbf42bb0b489bb52a550f66975bf0720e85e08f93b9d032fb661280475441b"}
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.792251 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee15e305-528c-4453-a4e7-11afe6c9d348","Type":"ContainerStarted","Data":"85275b4cb5a38031f39037e36fd1faa9069d398b563da80669c097e5eb3a2a42"}
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.794546 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bd04c245-6baf-44c1-8305-792b3e6607ad","Type":"ContainerStarted","Data":"97ffb0aa0ba1d654bc39f886a7b338e8e658e1485cca5a60c2fa4692efa96d7a"}
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.798979 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c96dk" event={"ID":"1666f128-9c4f-4a59-8210-bf5783b38f5f","Type":"ContainerStarted","Data":"9cfaa321eb3510f5b6f3715f018d0012eae20cca35fa70c0c173345bb08e4a41"}
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.815540 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e6a0e216-7d83-4c64-a162-5dc580401bbb","Type":"ContainerStarted","Data":"a63db3e8c86590229114f2bf9ecdb879f1c4a46278db93733a1c892e1e71bb1b"}
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.822322 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb9512b4-f71e-4659-9332-165fe5a86c08","Type":"ContainerStarted","Data":"ddaef3c79da538f5999b2c42231cd366e74a5f6ad3982921dd477dfe86c815e0"}
Jan 24 08:04:08 crc kubenswrapper[4705]: I0124 08:04:08.855446 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-c96dk" podStartSLOduration=2.8554241559999998 podStartE2EDuration="2.855424156s" podCreationTimestamp="2026-01-24 08:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:04:08.822481274 +0000 UTC m=+1387.542354562" watchObservedRunningTime="2026-01-24 08:04:08.855424156 +0000 UTC m=+1387.575297444"
Jan 24 08:04:09 crc kubenswrapper[4705]: I0124 08:04:09.833077 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4f7gt" event={"ID":"1b478974-9991-4090-89ea-48d3953d822d","Type":"ContainerStarted","Data":"04f6ddb6c4fb2e6a5b77925db5acab3ef76be16b6af313f9fb338900bea6ef1f"}
Jan 24 08:04:09 crc kubenswrapper[4705]: I0124 08:04:09.835295 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" event={"ID":"9f9b02e6-c95b-4670-96d0-9b36e96eb14c","Type":"ContainerStarted","Data":"23535d85a93298272869c192bf53fcd2204693c3bfebc009ae7696a46111307d"}
Jan 24 08:04:09 crc kubenswrapper[4705]: I0124 08:04:09.835553 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2"
Jan 24 08:04:09 crc kubenswrapper[4705]: I0124 08:04:09.867654 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-4f7gt" podStartSLOduration=2.867636901 podStartE2EDuration="2.867636901s" podCreationTimestamp="2026-01-24 08:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:04:09.862351503 +0000 UTC m=+1388.582224791" watchObservedRunningTime="2026-01-24 08:04:09.867636901 +0000 UTC m=+1388.587510179"
Jan 24 08:04:09 crc kubenswrapper[4705]: I0124 08:04:09.902384 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" podStartSLOduration=3.902359332 podStartE2EDuration="3.902359332s" podCreationTimestamp="2026-01-24 08:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:04:09.889288786 +0000 UTC m=+1388.609162074" watchObservedRunningTime="2026-01-24 08:04:09.902359332 +0000 UTC m=+1388.622232620"
Jan 24 08:04:10 crc kubenswrapper[4705]: I0124 08:04:10.631291 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 24 08:04:10 crc kubenswrapper[4705]: I0124 08:04:10.648331 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 24 08:04:11 crc kubenswrapper[4705]: I0124 08:04:11.867375 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bd04c245-6baf-44c1-8305-792b3e6607ad","Type":"ContainerStarted","Data":"4c487cc643791b2047eac736ffd6376263c1edf74459a8d8dd0555e650ddb890"}
Jan 24 08:04:11 crc kubenswrapper[4705]: I0124 08:04:11.869696 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e6a0e216-7d83-4c64-a162-5dc580401bbb","Type":"ContainerStarted","Data":"da65891b67687b6e1547984fa9afc5495c427b86cb5700269dcdcc398fb68e8a"}
Jan 24 08:04:11 crc kubenswrapper[4705]: I0124 08:04:11.869752 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e6a0e216-7d83-4c64-a162-5dc580401bbb" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://da65891b67687b6e1547984fa9afc5495c427b86cb5700269dcdcc398fb68e8a" gracePeriod=30
Jan 24 08:04:11 crc kubenswrapper[4705]: I0124 08:04:11.872232 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb9512b4-f71e-4659-9332-165fe5a86c08","Type":"ContainerStarted","Data":"552449c158a2b7fcb620413875103cf58c7f3655b7f2503ae32c1806a4f147fb"}
Jan 24 08:04:11 crc kubenswrapper[4705]: I0124 08:04:11.872278 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb9512b4-f71e-4659-9332-165fe5a86c08","Type":"ContainerStarted","Data":"c5a057c22aae9c8775edf909f5093e8304ed21c172a2d45bc20089812bde2275"}
Jan 24 08:04:11 crc kubenswrapper[4705]: I0124 08:04:11.872347 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb9512b4-f71e-4659-9332-165fe5a86c08" containerName="nova-metadata-log" containerID="cri-o://c5a057c22aae9c8775edf909f5093e8304ed21c172a2d45bc20089812bde2275" gracePeriod=30
Jan 24 08:04:11 crc kubenswrapper[4705]: I0124 08:04:11.872367 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb9512b4-f71e-4659-9332-165fe5a86c08" containerName="nova-metadata-metadata" containerID="cri-o://552449c158a2b7fcb620413875103cf58c7f3655b7f2503ae32c1806a4f147fb" gracePeriod=30
Jan 24 08:04:11 crc kubenswrapper[4705]: I0124 08:04:11.874917 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee15e305-528c-4453-a4e7-11afe6c9d348","Type":"ContainerStarted","Data":"588ff27c9bfafb476295d1f671312ccd8bf9690e562a16d301e5eb15adfe79ba"}
Jan 24 08:04:11 crc kubenswrapper[4705]: I0124 08:04:11.874957 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee15e305-528c-4453-a4e7-11afe6c9d348","Type":"ContainerStarted","Data":"7300deae6886f2a8121e8ecd52b6b1594e88930186d5eac2e737d40e3ce1a5e8"}
Jan 24 08:04:11 crc kubenswrapper[4705]: I0124 08:04:11.891968 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.57997273 podStartE2EDuration="5.891947165s" podCreationTimestamp="2026-01-24 08:04:06 +0000 UTC" firstStartedPulling="2026-01-24 08:04:07.836502114 +0000 UTC m=+1386.556375402" lastFinishedPulling="2026-01-24 08:04:11.148476549 +0000 UTC m=+1389.868349837" observedRunningTime="2026-01-24 08:04:11.884720463 +0000 UTC m=+1390.604593761" watchObservedRunningTime="2026-01-24 08:04:11.891947165 +0000 UTC m=+1390.611820453"
Jan 24 08:04:11 crc kubenswrapper[4705]: I0124 08:04:11.915188 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.765043747 podStartE2EDuration="5.915165135s" podCreationTimestamp="2026-01-24 08:04:06 +0000 UTC" firstStartedPulling="2026-01-24 08:04:07.99651104 +0000 UTC m=+1386.716384318" lastFinishedPulling="2026-01-24 08:04:11.146632418 +0000 UTC m=+1389.866505706" observedRunningTime="2026-01-24 08:04:11.903176719 +0000 UTC m=+1390.623050027" watchObservedRunningTime="2026-01-24 08:04:11.915165135 +0000 UTC m=+1390.635038423"
Jan 24 08:04:11 crc kubenswrapper[4705]: I0124 08:04:11.925712 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.617742566 podStartE2EDuration="5.925693279s" podCreationTimestamp="2026-01-24 08:04:06 +0000 UTC" firstStartedPulling="2026-01-24 08:04:07.838674395 +0000 UTC m=+1386.558547683" lastFinishedPulling="2026-01-24 08:04:11.146625108 +0000 UTC m=+1389.866498396" observedRunningTime="2026-01-24 08:04:11.922666625 +0000 UTC m=+1390.642539923" watchObservedRunningTime="2026-01-24 08:04:11.925693279 +0000 UTC m=+1390.645566567"
Jan 24 08:04:11 crc kubenswrapper[4705]: I0124 08:04:11.945085 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.949524637 podStartE2EDuration="5.945069381s" podCreationTimestamp="2026-01-24 08:04:06 +0000 UTC" firstStartedPulling="2026-01-24 08:04:08.154962252 +0000 UTC m=+1386.874835540" lastFinishedPulling="2026-01-24 08:04:11.150506996 +0000 UTC m=+1389.870380284" observedRunningTime="2026-01-24 08:04:11.943398644 +0000 UTC m=+1390.663271942" watchObservedRunningTime="2026-01-24 08:04:11.945069381 +0000 UTC m=+1390.664942669"
Jan 24 08:04:12 crc kubenswrapper[4705]: I0124 08:04:12.046775 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 24 08:04:12 crc kubenswrapper[4705]: I0124 08:04:12.063413 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 24 08:04:12 crc kubenswrapper[4705]: I0124 08:04:12.341735 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 24 08:04:12 crc kubenswrapper[4705]: I0124 08:04:12.352694 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 24 08:04:12 crc kubenswrapper[4705]: I0124 08:04:12.901431 4705 generic.go:334] "Generic (PLEG): container finished" podID="bb9512b4-f71e-4659-9332-165fe5a86c08" containerID="c5a057c22aae9c8775edf909f5093e8304ed21c172a2d45bc20089812bde2275" exitCode=143
Jan 24 08:04:12 crc kubenswrapper[4705]: I0124 08:04:12.902385 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb9512b4-f71e-4659-9332-165fe5a86c08","Type":"ContainerDied","Data":"c5a057c22aae9c8775edf909f5093e8304ed21c172a2d45bc20089812bde2275"}
Jan 24 08:04:15 crc kubenswrapper[4705]: I0124 08:04:15.930354 4705 generic.go:334] "Generic (PLEG): container finished" podID="1b478974-9991-4090-89ea-48d3953d822d" containerID="04f6ddb6c4fb2e6a5b77925db5acab3ef76be16b6af313f9fb338900bea6ef1f" exitCode=0
Jan 24 08:04:15 crc kubenswrapper[4705]: I0124 08:04:15.930438 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4f7gt" event={"ID":"1b478974-9991-4090-89ea-48d3953d822d","Type":"ContainerDied","Data":"04f6ddb6c4fb2e6a5b77925db5acab3ef76be16b6af313f9fb338900bea6ef1f"}
Jan 24 08:04:16 crc kubenswrapper[4705]: I0124 08:04:16.078518 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 24 08:04:16 crc kubenswrapper[4705]: I0124 08:04:16.940616 4705 generic.go:334] "Generic (PLEG): container finished" podID="1666f128-9c4f-4a59-8210-bf5783b38f5f" containerID="9cfaa321eb3510f5b6f3715f018d0012eae20cca35fa70c0c173345bb08e4a41" exitCode=0
Jan 24 08:04:16 crc kubenswrapper[4705]: I0124 08:04:16.940695 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c96dk" event={"ID":"1666f128-9c4f-4a59-8210-bf5783b38f5f","Type":"ContainerDied","Data":"9cfaa321eb3510f5b6f3715f018d0012eae20cca35fa70c0c173345bb08e4a41"}
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.064685 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.098189 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.110298 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.110391 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.372598 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4f7gt"
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.389983 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2"
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.459280 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-rv7zw"]
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.459578 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" podUID="8679a97c-a310-4bec-945f-4fb2756b3ff6" containerName="dnsmasq-dns" containerID="cri-o://44908a71dc8e241f15bb05f90d56774b4c5c7f20c6b0da0b2c07e51cfc63a8d3" gracePeriod=10
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.472313 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-combined-ca-bundle\") pod \"1b478974-9991-4090-89ea-48d3953d822d\" (UID: \"1b478974-9991-4090-89ea-48d3953d822d\") "
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.472400 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-scripts\") pod \"1b478974-9991-4090-89ea-48d3953d822d\" (UID: \"1b478974-9991-4090-89ea-48d3953d822d\") "
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.472585 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdt6c\" (UniqueName: \"kubernetes.io/projected/1b478974-9991-4090-89ea-48d3953d822d-kube-api-access-pdt6c\") pod \"1b478974-9991-4090-89ea-48d3953d822d\" (UID: \"1b478974-9991-4090-89ea-48d3953d822d\") "
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.472652 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-config-data\") pod \"1b478974-9991-4090-89ea-48d3953d822d\" (UID: \"1b478974-9991-4090-89ea-48d3953d822d\") "
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.480954 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-scripts" (OuterVolumeSpecName: "scripts") pod "1b478974-9991-4090-89ea-48d3953d822d" (UID: "1b478974-9991-4090-89ea-48d3953d822d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.481096 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b478974-9991-4090-89ea-48d3953d822d-kube-api-access-pdt6c" (OuterVolumeSpecName: "kube-api-access-pdt6c") pod "1b478974-9991-4090-89ea-48d3953d822d" (UID: "1b478974-9991-4090-89ea-48d3953d822d"). InnerVolumeSpecName "kube-api-access-pdt6c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.506787 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-config-data" (OuterVolumeSpecName: "config-data") pod "1b478974-9991-4090-89ea-48d3953d822d" (UID: "1b478974-9991-4090-89ea-48d3953d822d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.521194 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b478974-9991-4090-89ea-48d3953d822d" (UID: "1b478974-9991-4090-89ea-48d3953d822d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.575155 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.575198 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.575214 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b478974-9991-4090-89ea-48d3953d822d-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.575226 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdt6c\" (UniqueName: \"kubernetes.io/projected/1b478974-9991-4090-89ea-48d3953d822d-kube-api-access-pdt6c\") on node \"crc\" DevicePath \"\""
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.927618 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw"
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.956229 4705 generic.go:334] "Generic (PLEG): container finished" podID="8679a97c-a310-4bec-945f-4fb2756b3ff6" containerID="44908a71dc8e241f15bb05f90d56774b4c5c7f20c6b0da0b2c07e51cfc63a8d3" exitCode=0
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.956288 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" event={"ID":"8679a97c-a310-4bec-945f-4fb2756b3ff6","Type":"ContainerDied","Data":"44908a71dc8e241f15bb05f90d56774b4c5c7f20c6b0da0b2c07e51cfc63a8d3"}
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.956318 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw" event={"ID":"8679a97c-a310-4bec-945f-4fb2756b3ff6","Type":"ContainerDied","Data":"433fcd95bd882b20cbef2800cde532ca1768b690c8cfd8224c505eb45d69e5db"}
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.956344 4705 scope.go:117] "RemoveContainer" containerID="44908a71dc8e241f15bb05f90d56774b4c5c7f20c6b0da0b2c07e51cfc63a8d3"
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.956473 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-rv7zw"
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.965949 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4f7gt" event={"ID":"1b478974-9991-4090-89ea-48d3953d822d","Type":"ContainerDied","Data":"65ef279033c2e942f443de36f3b09fd9536b185b6a213e13fa4246899f5868b5"}
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.966016 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65ef279033c2e942f443de36f3b09fd9536b185b6a213e13fa4246899f5868b5"
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.966094 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4f7gt"
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.990546 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-ovsdbserver-sb\") pod \"8679a97c-a310-4bec-945f-4fb2756b3ff6\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") "
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.990608 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-dns-svc\") pod \"8679a97c-a310-4bec-945f-4fb2756b3ff6\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") "
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.990739 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-ovsdbserver-nb\") pod \"8679a97c-a310-4bec-945f-4fb2756b3ff6\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") "
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.990772 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7f2h\" (UniqueName: \"kubernetes.io/projected/8679a97c-a310-4bec-945f-4fb2756b3ff6-kube-api-access-j7f2h\") pod \"8679a97c-a310-4bec-945f-4fb2756b3ff6\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") "
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.990792 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-dns-swift-storage-0\") pod \"8679a97c-a310-4bec-945f-4fb2756b3ff6\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") "
Jan 24 08:04:17 crc kubenswrapper[4705]: I0124 08:04:17.990905 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-config\") pod \"8679a97c-a310-4bec-945f-4fb2756b3ff6\" (UID: \"8679a97c-a310-4bec-945f-4fb2756b3ff6\") "
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.005078 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8679a97c-a310-4bec-945f-4fb2756b3ff6-kube-api-access-j7f2h" (OuterVolumeSpecName: "kube-api-access-j7f2h") pod "8679a97c-a310-4bec-945f-4fb2756b3ff6" (UID: "8679a97c-a310-4bec-945f-4fb2756b3ff6"). InnerVolumeSpecName "kube-api-access-j7f2h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.011890 4705 scope.go:117] "RemoveContainer" containerID="20de91b0314a537bc89cbb390c85be966ff7e50bcca9251db12658c0b655f57a"
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.035074 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.043482 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 24 08:04:18 crc kubenswrapper[4705]: E0124 08:04:18.043935 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8679a97c-a310-4bec-945f-4fb2756b3ff6" containerName="init"
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.043953 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8679a97c-a310-4bec-945f-4fb2756b3ff6" containerName="init"
Jan 24 08:04:18 crc kubenswrapper[4705]: E0124 08:04:18.043990 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8679a97c-a310-4bec-945f-4fb2756b3ff6" containerName="dnsmasq-dns"
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.043997 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8679a97c-a310-4bec-945f-4fb2756b3ff6" containerName="dnsmasq-dns"
Jan 24 08:04:18 crc kubenswrapper[4705]: E0124 08:04:18.044016 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b478974-9991-4090-89ea-48d3953d822d" containerName="nova-cell1-conductor-db-sync"
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.044023 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b478974-9991-4090-89ea-48d3953d822d" containerName="nova-cell1-conductor-db-sync"
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.044198 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b478974-9991-4090-89ea-48d3953d822d" containerName="nova-cell1-conductor-db-sync"
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.044214 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8679a97c-a310-4bec-945f-4fb2756b3ff6" containerName="dnsmasq-dns"
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.046412 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.050552 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.090043 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.093630 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d7a0724-6bfc-440e-958b-28313c59010d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0d7a0724-6bfc-440e-958b-28313c59010d\") " pod="openstack/nova-cell1-conductor-0"
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.093732 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbjrw\" (UniqueName: \"kubernetes.io/projected/0d7a0724-6bfc-440e-958b-28313c59010d-kube-api-access-lbjrw\") pod \"nova-cell1-conductor-0\" (UID: \"0d7a0724-6bfc-440e-958b-28313c59010d\") " pod="openstack/nova-cell1-conductor-0"
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.094349 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d7a0724-6bfc-440e-958b-28313c59010d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0d7a0724-6bfc-440e-958b-28313c59010d\") " pod="openstack/nova-cell1-conductor-0"
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.100132 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7f2h\" (UniqueName: \"kubernetes.io/projected/8679a97c-a310-4bec-945f-4fb2756b3ff6-kube-api-access-j7f2h\") on node \"crc\" DevicePath \"\""
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.102984 4705 scope.go:117] "RemoveContainer" containerID="44908a71dc8e241f15bb05f90d56774b4c5c7f20c6b0da0b2c07e51cfc63a8d3"
Jan 24 08:04:18 crc kubenswrapper[4705]: E0124 08:04:18.108154 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44908a71dc8e241f15bb05f90d56774b4c5c7f20c6b0da0b2c07e51cfc63a8d3\": container with ID starting with 44908a71dc8e241f15bb05f90d56774b4c5c7f20c6b0da0b2c07e51cfc63a8d3 not found: ID does not exist" containerID="44908a71dc8e241f15bb05f90d56774b4c5c7f20c6b0da0b2c07e51cfc63a8d3"
Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.108195 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44908a71dc8e241f15bb05f90d56774b4c5c7f20c6b0da0b2c07e51cfc63a8d3"} err="failed to get container status \"44908a71dc8e241f15bb05f90d56774b4c5c7f20c6b0da0b2c07e51cfc63a8d3\": rpc error: code = NotFound desc = could not find container \"44908a71dc8e241f15bb05f90d56774b4c5c7f20c6b0da0b2c07e51cfc63a8d3\": container with ID starting with 44908a71dc8e241f15bb05f90d56774b4c5c7f20c6b0da0b2c07e51cfc63a8d3 not found: ID does not
exist" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.108218 4705 scope.go:117] "RemoveContainer" containerID="20de91b0314a537bc89cbb390c85be966ff7e50bcca9251db12658c0b655f57a" Jan 24 08:04:18 crc kubenswrapper[4705]: E0124 08:04:18.116845 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20de91b0314a537bc89cbb390c85be966ff7e50bcca9251db12658c0b655f57a\": container with ID starting with 20de91b0314a537bc89cbb390c85be966ff7e50bcca9251db12658c0b655f57a not found: ID does not exist" containerID="20de91b0314a537bc89cbb390c85be966ff7e50bcca9251db12658c0b655f57a" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.116888 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20de91b0314a537bc89cbb390c85be966ff7e50bcca9251db12658c0b655f57a"} err="failed to get container status \"20de91b0314a537bc89cbb390c85be966ff7e50bcca9251db12658c0b655f57a\": rpc error: code = NotFound desc = could not find container \"20de91b0314a537bc89cbb390c85be966ff7e50bcca9251db12658c0b655f57a\": container with ID starting with 20de91b0314a537bc89cbb390c85be966ff7e50bcca9251db12658c0b655f57a not found: ID does not exist" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.124573 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8679a97c-a310-4bec-945f-4fb2756b3ff6" (UID: "8679a97c-a310-4bec-945f-4fb2756b3ff6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.127145 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-config" (OuterVolumeSpecName: "config") pod "8679a97c-a310-4bec-945f-4fb2756b3ff6" (UID: "8679a97c-a310-4bec-945f-4fb2756b3ff6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.133325 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8679a97c-a310-4bec-945f-4fb2756b3ff6" (UID: "8679a97c-a310-4bec-945f-4fb2756b3ff6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.149697 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8679a97c-a310-4bec-945f-4fb2756b3ff6" (UID: "8679a97c-a310-4bec-945f-4fb2756b3ff6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.332922 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ee15e305-528c-4453-a4e7-11afe6c9d348" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.333253 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ee15e305-528c-4453-a4e7-11afe6c9d348" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.344365 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8679a97c-a310-4bec-945f-4fb2756b3ff6" (UID: "8679a97c-a310-4bec-945f-4fb2756b3ff6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.346739 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d7a0724-6bfc-440e-958b-28313c59010d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0d7a0724-6bfc-440e-958b-28313c59010d\") " pod="openstack/nova-cell1-conductor-0" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.346976 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbjrw\" (UniqueName: \"kubernetes.io/projected/0d7a0724-6bfc-440e-958b-28313c59010d-kube-api-access-lbjrw\") pod \"nova-cell1-conductor-0\" (UID: \"0d7a0724-6bfc-440e-958b-28313c59010d\") " pod="openstack/nova-cell1-conductor-0" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.347208 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d7a0724-6bfc-440e-958b-28313c59010d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0d7a0724-6bfc-440e-958b-28313c59010d\") " pod="openstack/nova-cell1-conductor-0" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.347370 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.347395 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.347407 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:18 crc 
kubenswrapper[4705]: I0124 08:04:18.347420 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.347431 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8679a97c-a310-4bec-945f-4fb2756b3ff6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.357654 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d7a0724-6bfc-440e-958b-28313c59010d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0d7a0724-6bfc-440e-958b-28313c59010d\") " pod="openstack/nova-cell1-conductor-0" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.358014 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d7a0724-6bfc-440e-958b-28313c59010d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0d7a0724-6bfc-440e-958b-28313c59010d\") " pod="openstack/nova-cell1-conductor-0" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.373489 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbjrw\" (UniqueName: \"kubernetes.io/projected/0d7a0724-6bfc-440e-958b-28313c59010d-kube-api-access-lbjrw\") pod \"nova-cell1-conductor-0\" (UID: \"0d7a0724-6bfc-440e-958b-28313c59010d\") " pod="openstack/nova-cell1-conductor-0" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.381632 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.741890 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-rv7zw"] Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.743352 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c96dk" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.761636 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-rv7zw"] Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.816255 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxppw\" (UniqueName: \"kubernetes.io/projected/1666f128-9c4f-4a59-8210-bf5783b38f5f-kube-api-access-jxppw\") pod \"1666f128-9c4f-4a59-8210-bf5783b38f5f\" (UID: \"1666f128-9c4f-4a59-8210-bf5783b38f5f\") " Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.816305 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-scripts\") pod \"1666f128-9c4f-4a59-8210-bf5783b38f5f\" (UID: \"1666f128-9c4f-4a59-8210-bf5783b38f5f\") " Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.816480 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-config-data\") pod \"1666f128-9c4f-4a59-8210-bf5783b38f5f\" (UID: \"1666f128-9c4f-4a59-8210-bf5783b38f5f\") " Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.817543 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-combined-ca-bundle\") pod \"1666f128-9c4f-4a59-8210-bf5783b38f5f\" (UID: \"1666f128-9c4f-4a59-8210-bf5783b38f5f\") " Jan 24 
08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.823668 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-scripts" (OuterVolumeSpecName: "scripts") pod "1666f128-9c4f-4a59-8210-bf5783b38f5f" (UID: "1666f128-9c4f-4a59-8210-bf5783b38f5f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.824448 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1666f128-9c4f-4a59-8210-bf5783b38f5f-kube-api-access-jxppw" (OuterVolumeSpecName: "kube-api-access-jxppw") pod "1666f128-9c4f-4a59-8210-bf5783b38f5f" (UID: "1666f128-9c4f-4a59-8210-bf5783b38f5f"). InnerVolumeSpecName "kube-api-access-jxppw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.828859 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxppw\" (UniqueName: \"kubernetes.io/projected/1666f128-9c4f-4a59-8210-bf5783b38f5f-kube-api-access-jxppw\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.828898 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.861377 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1666f128-9c4f-4a59-8210-bf5783b38f5f" (UID: "1666f128-9c4f-4a59-8210-bf5783b38f5f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.863152 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-config-data" (OuterVolumeSpecName: "config-data") pod "1666f128-9c4f-4a59-8210-bf5783b38f5f" (UID: "1666f128-9c4f-4a59-8210-bf5783b38f5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.930597 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.930637 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666f128-9c4f-4a59-8210-bf5783b38f5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.981005 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c96dk" event={"ID":"1666f128-9c4f-4a59-8210-bf5783b38f5f","Type":"ContainerDied","Data":"d1f5e722698186122036ae8b1ac59976331998a74647936918cfe8d03d276c19"} Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.981058 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1f5e722698186122036ae8b1ac59976331998a74647936918cfe8d03d276c19" Jan 24 08:04:18 crc kubenswrapper[4705]: I0124 08:04:18.981024 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c96dk" Jan 24 08:04:19 crc kubenswrapper[4705]: I0124 08:04:19.029708 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 24 08:04:19 crc kubenswrapper[4705]: I0124 08:04:19.117190 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 08:04:19 crc kubenswrapper[4705]: I0124 08:04:19.117476 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ee15e305-528c-4453-a4e7-11afe6c9d348" containerName="nova-api-log" containerID="cri-o://7300deae6886f2a8121e8ecd52b6b1594e88930186d5eac2e737d40e3ce1a5e8" gracePeriod=30 Jan 24 08:04:19 crc kubenswrapper[4705]: I0124 08:04:19.117572 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ee15e305-528c-4453-a4e7-11afe6c9d348" containerName="nova-api-api" containerID="cri-o://588ff27c9bfafb476295d1f671312ccd8bf9690e562a16d301e5eb15adfe79ba" gracePeriod=30 Jan 24 08:04:19 crc kubenswrapper[4705]: I0124 08:04:19.292933 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 08:04:19 crc kubenswrapper[4705]: I0124 08:04:19.588713 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8679a97c-a310-4bec-945f-4fb2756b3ff6" path="/var/lib/kubelet/pods/8679a97c-a310-4bec-945f-4fb2756b3ff6/volumes" Jan 24 08:04:20 crc kubenswrapper[4705]: I0124 08:04:20.027977 4705 generic.go:334] "Generic (PLEG): container finished" podID="ee15e305-528c-4453-a4e7-11afe6c9d348" containerID="7300deae6886f2a8121e8ecd52b6b1594e88930186d5eac2e737d40e3ce1a5e8" exitCode=143 Jan 24 08:04:20 crc kubenswrapper[4705]: I0124 08:04:20.028059 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ee15e305-528c-4453-a4e7-11afe6c9d348","Type":"ContainerDied","Data":"7300deae6886f2a8121e8ecd52b6b1594e88930186d5eac2e737d40e3ce1a5e8"} Jan 24 08:04:20 crc kubenswrapper[4705]: I0124 08:04:20.029566 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0d7a0724-6bfc-440e-958b-28313c59010d","Type":"ContainerStarted","Data":"43c895b4c7bbaebb9b8f0f921d19b9a725f1b56d3a2726cf9fc0d23783970c69"} Jan 24 08:04:20 crc kubenswrapper[4705]: I0124 08:04:20.029603 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0d7a0724-6bfc-440e-958b-28313c59010d","Type":"ContainerStarted","Data":"a0e063665a0ca43aea7840a42038219d98f7632dbfc39ff81580d982ad4d7fe8"} Jan 24 08:04:20 crc kubenswrapper[4705]: I0124 08:04:20.029678 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 24 08:04:20 crc kubenswrapper[4705]: I0124 08:04:20.029814 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="bd04c245-6baf-44c1-8305-792b3e6607ad" containerName="nova-scheduler-scheduler" containerID="cri-o://4c487cc643791b2047eac736ffd6376263c1edf74459a8d8dd0555e650ddb890" gracePeriod=30 Jan 24 08:04:20 crc kubenswrapper[4705]: I0124 08:04:20.053500 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.053483095 podStartE2EDuration="2.053483095s" podCreationTimestamp="2026-01-24 08:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:04:20.046492259 +0000 UTC m=+1398.766365547" watchObservedRunningTime="2026-01-24 08:04:20.053483095 +0000 UTC m=+1398.773356383" Jan 24 08:04:20 crc kubenswrapper[4705]: I0124 08:04:20.963977 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/kube-state-metrics-0"] Jan 24 08:04:20 crc kubenswrapper[4705]: I0124 08:04:20.964275 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c1bb965b-b26a-4471-86ef-467dde0aea03" containerName="kube-state-metrics" containerID="cri-o://af57192cc368713ed8e6ed5072d53546cb598b0c1cd654fe48a4c4356d6548de" gracePeriod=30 Jan 24 08:04:21 crc kubenswrapper[4705]: I0124 08:04:21.458471 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 08:04:21 crc kubenswrapper[4705]: I0124 08:04:21.487463 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkqbz\" (UniqueName: \"kubernetes.io/projected/c1bb965b-b26a-4471-86ef-467dde0aea03-kube-api-access-mkqbz\") pod \"c1bb965b-b26a-4471-86ef-467dde0aea03\" (UID: \"c1bb965b-b26a-4471-86ef-467dde0aea03\") " Jan 24 08:04:21 crc kubenswrapper[4705]: I0124 08:04:21.521344 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1bb965b-b26a-4471-86ef-467dde0aea03-kube-api-access-mkqbz" (OuterVolumeSpecName: "kube-api-access-mkqbz") pod "c1bb965b-b26a-4471-86ef-467dde0aea03" (UID: "c1bb965b-b26a-4471-86ef-467dde0aea03"). InnerVolumeSpecName "kube-api-access-mkqbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:04:21 crc kubenswrapper[4705]: I0124 08:04:21.590720 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkqbz\" (UniqueName: \"kubernetes.io/projected/c1bb965b-b26a-4471-86ef-467dde0aea03-kube-api-access-mkqbz\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:22 crc kubenswrapper[4705]: E0124 08:04:22.065080 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c487cc643791b2047eac736ffd6376263c1edf74459a8d8dd0555e650ddb890" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.066513 4705 generic.go:334] "Generic (PLEG): container finished" podID="c1bb965b-b26a-4471-86ef-467dde0aea03" containerID="af57192cc368713ed8e6ed5072d53546cb598b0c1cd654fe48a4c4356d6548de" exitCode=2 Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.066552 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c1bb965b-b26a-4471-86ef-467dde0aea03","Type":"ContainerDied","Data":"af57192cc368713ed8e6ed5072d53546cb598b0c1cd654fe48a4c4356d6548de"} Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.066588 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.066630 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c1bb965b-b26a-4471-86ef-467dde0aea03","Type":"ContainerDied","Data":"37cb415cb31074999e99cfabf10051095265c15309df9761681d689e7a3f7986"} Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.066671 4705 scope.go:117] "RemoveContainer" containerID="af57192cc368713ed8e6ed5072d53546cb598b0c1cd654fe48a4c4356d6548de" Jan 24 08:04:22 crc kubenswrapper[4705]: E0124 08:04:22.071618 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c487cc643791b2047eac736ffd6376263c1edf74459a8d8dd0555e650ddb890" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 24 08:04:22 crc kubenswrapper[4705]: E0124 08:04:22.073478 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c487cc643791b2047eac736ffd6376263c1edf74459a8d8dd0555e650ddb890" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 24 08:04:22 crc kubenswrapper[4705]: E0124 08:04:22.073514 4705 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="bd04c245-6baf-44c1-8305-792b3e6607ad" containerName="nova-scheduler-scheduler" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.110764 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.119653 4705 scope.go:117] "RemoveContainer" 
containerID="af57192cc368713ed8e6ed5072d53546cb598b0c1cd654fe48a4c4356d6548de" Jan 24 08:04:22 crc kubenswrapper[4705]: E0124 08:04:22.120269 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af57192cc368713ed8e6ed5072d53546cb598b0c1cd654fe48a4c4356d6548de\": container with ID starting with af57192cc368713ed8e6ed5072d53546cb598b0c1cd654fe48a4c4356d6548de not found: ID does not exist" containerID="af57192cc368713ed8e6ed5072d53546cb598b0c1cd654fe48a4c4356d6548de" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.120327 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af57192cc368713ed8e6ed5072d53546cb598b0c1cd654fe48a4c4356d6548de"} err="failed to get container status \"af57192cc368713ed8e6ed5072d53546cb598b0c1cd654fe48a4c4356d6548de\": rpc error: code = NotFound desc = could not find container \"af57192cc368713ed8e6ed5072d53546cb598b0c1cd654fe48a4c4356d6548de\": container with ID starting with af57192cc368713ed8e6ed5072d53546cb598b0c1cd654fe48a4c4356d6548de not found: ID does not exist" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.121985 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.132906 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 08:04:22 crc kubenswrapper[4705]: E0124 08:04:22.133974 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1666f128-9c4f-4a59-8210-bf5783b38f5f" containerName="nova-manage" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.134027 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1666f128-9c4f-4a59-8210-bf5783b38f5f" containerName="nova-manage" Jan 24 08:04:22 crc kubenswrapper[4705]: E0124 08:04:22.134049 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c1bb965b-b26a-4471-86ef-467dde0aea03" containerName="kube-state-metrics" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.134058 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1bb965b-b26a-4471-86ef-467dde0aea03" containerName="kube-state-metrics" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.134552 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="1666f128-9c4f-4a59-8210-bf5783b38f5f" containerName="nova-manage" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.134573 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1bb965b-b26a-4471-86ef-467dde0aea03" containerName="kube-state-metrics" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.136456 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.140541 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.141023 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.149140 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.210566 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b72fb68d-e944-4d99-b1d0-eb097c807e14-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b72fb68d-e944-4d99-b1d0-eb097c807e14\") " pod="openstack/kube-state-metrics-0" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.210613 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dw9m\" (UniqueName: 
\"kubernetes.io/projected/b72fb68d-e944-4d99-b1d0-eb097c807e14-kube-api-access-9dw9m\") pod \"kube-state-metrics-0\" (UID: \"b72fb68d-e944-4d99-b1d0-eb097c807e14\") " pod="openstack/kube-state-metrics-0" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.210655 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b72fb68d-e944-4d99-b1d0-eb097c807e14-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b72fb68d-e944-4d99-b1d0-eb097c807e14\") " pod="openstack/kube-state-metrics-0" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.210796 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b72fb68d-e944-4d99-b1d0-eb097c807e14-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b72fb68d-e944-4d99-b1d0-eb097c807e14\") " pod="openstack/kube-state-metrics-0" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.312940 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b72fb68d-e944-4d99-b1d0-eb097c807e14-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b72fb68d-e944-4d99-b1d0-eb097c807e14\") " pod="openstack/kube-state-metrics-0" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.313046 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b72fb68d-e944-4d99-b1d0-eb097c807e14-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b72fb68d-e944-4d99-b1d0-eb097c807e14\") " pod="openstack/kube-state-metrics-0" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.313077 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dw9m\" (UniqueName: 
\"kubernetes.io/projected/b72fb68d-e944-4d99-b1d0-eb097c807e14-kube-api-access-9dw9m\") pod \"kube-state-metrics-0\" (UID: \"b72fb68d-e944-4d99-b1d0-eb097c807e14\") " pod="openstack/kube-state-metrics-0" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.313106 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b72fb68d-e944-4d99-b1d0-eb097c807e14-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b72fb68d-e944-4d99-b1d0-eb097c807e14\") " pod="openstack/kube-state-metrics-0" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.333901 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b72fb68d-e944-4d99-b1d0-eb097c807e14-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b72fb68d-e944-4d99-b1d0-eb097c807e14\") " pod="openstack/kube-state-metrics-0" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.335371 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b72fb68d-e944-4d99-b1d0-eb097c807e14-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b72fb68d-e944-4d99-b1d0-eb097c807e14\") " pod="openstack/kube-state-metrics-0" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.336053 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b72fb68d-e944-4d99-b1d0-eb097c807e14-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b72fb68d-e944-4d99-b1d0-eb097c807e14\") " pod="openstack/kube-state-metrics-0" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.337511 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dw9m\" (UniqueName: 
\"kubernetes.io/projected/b72fb68d-e944-4d99-b1d0-eb097c807e14-kube-api-access-9dw9m\") pod \"kube-state-metrics-0\" (UID: \"b72fb68d-e944-4d99-b1d0-eb097c807e14\") " pod="openstack/kube-state-metrics-0" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.472983 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 08:04:22 crc kubenswrapper[4705]: I0124 08:04:22.962595 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.077669 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b72fb68d-e944-4d99-b1d0-eb097c807e14","Type":"ContainerStarted","Data":"c707ffd839d2125f16610836fae6173df4c028f4d7c5694e8b45e7ed91b17951"} Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.082945 4705 generic.go:334] "Generic (PLEG): container finished" podID="bd04c245-6baf-44c1-8305-792b3e6607ad" containerID="4c487cc643791b2047eac736ffd6376263c1edf74459a8d8dd0555e650ddb890" exitCode=0 Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.082997 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bd04c245-6baf-44c1-8305-792b3e6607ad","Type":"ContainerDied","Data":"4c487cc643791b2047eac736ffd6376263c1edf74459a8d8dd0555e650ddb890"} Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.126219 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.126566 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerName="ceilometer-central-agent" containerID="cri-o://5545ae730b47281a17d5b339d9724aed9d54ef0ad399b7e62e79c3034046cd40" gracePeriod=30 Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.126624 4705 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerName="ceilometer-notification-agent" containerID="cri-o://210ddcc0b0a6d2bd196c6067db234a821b417456df4847f36fc7210486703bb2" gracePeriod=30 Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.126649 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerName="proxy-httpd" containerID="cri-o://f1e8dfc34a8303526c07c7684d685af5b09a47a57e94469b0b066f01ee1f4898" gracePeriod=30 Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.126750 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerName="sg-core" containerID="cri-o://4f0f1152afea40871d3c6bf29f2e9a6fc26eebc48bcb29f1d2a9076d1759eca4" gracePeriod=30 Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.567050 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.586277 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1bb965b-b26a-4471-86ef-467dde0aea03" path="/var/lib/kubelet/pods/c1bb965b-b26a-4471-86ef-467dde0aea03/volumes" Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.639213 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc8qm\" (UniqueName: \"kubernetes.io/projected/bd04c245-6baf-44c1-8305-792b3e6607ad-kube-api-access-mc8qm\") pod \"bd04c245-6baf-44c1-8305-792b3e6607ad\" (UID: \"bd04c245-6baf-44c1-8305-792b3e6607ad\") " Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.639453 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd04c245-6baf-44c1-8305-792b3e6607ad-combined-ca-bundle\") pod \"bd04c245-6baf-44c1-8305-792b3e6607ad\" (UID: \"bd04c245-6baf-44c1-8305-792b3e6607ad\") " Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.639491 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd04c245-6baf-44c1-8305-792b3e6607ad-config-data\") pod \"bd04c245-6baf-44c1-8305-792b3e6607ad\" (UID: \"bd04c245-6baf-44c1-8305-792b3e6607ad\") " Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.647057 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd04c245-6baf-44c1-8305-792b3e6607ad-kube-api-access-mc8qm" (OuterVolumeSpecName: "kube-api-access-mc8qm") pod "bd04c245-6baf-44c1-8305-792b3e6607ad" (UID: "bd04c245-6baf-44c1-8305-792b3e6607ad"). InnerVolumeSpecName "kube-api-access-mc8qm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.683353 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd04c245-6baf-44c1-8305-792b3e6607ad-config-data" (OuterVolumeSpecName: "config-data") pod "bd04c245-6baf-44c1-8305-792b3e6607ad" (UID: "bd04c245-6baf-44c1-8305-792b3e6607ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.702200 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd04c245-6baf-44c1-8305-792b3e6607ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd04c245-6baf-44c1-8305-792b3e6607ad" (UID: "bd04c245-6baf-44c1-8305-792b3e6607ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.741516 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd04c245-6baf-44c1-8305-792b3e6607ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.741752 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd04c245-6baf-44c1-8305-792b3e6607ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:23 crc kubenswrapper[4705]: I0124 08:04:23.741762 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc8qm\" (UniqueName: \"kubernetes.io/projected/bd04c245-6baf-44c1-8305-792b3e6607ad-kube-api-access-mc8qm\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.093606 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"bd04c245-6baf-44c1-8305-792b3e6607ad","Type":"ContainerDied","Data":"97ffb0aa0ba1d654bc39f886a7b338e8e658e1485cca5a60c2fa4692efa96d7a"} Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.093673 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.093746 4705 scope.go:117] "RemoveContainer" containerID="4c487cc643791b2047eac736ffd6376263c1edf74459a8d8dd0555e650ddb890" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.096983 4705 generic.go:334] "Generic (PLEG): container finished" podID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerID="f1e8dfc34a8303526c07c7684d685af5b09a47a57e94469b0b066f01ee1f4898" exitCode=0 Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.097005 4705 generic.go:334] "Generic (PLEG): container finished" podID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerID="4f0f1152afea40871d3c6bf29f2e9a6fc26eebc48bcb29f1d2a9076d1759eca4" exitCode=2 Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.097012 4705 generic.go:334] "Generic (PLEG): container finished" podID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerID="5545ae730b47281a17d5b339d9724aed9d54ef0ad399b7e62e79c3034046cd40" exitCode=0 Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.097045 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42","Type":"ContainerDied","Data":"f1e8dfc34a8303526c07c7684d685af5b09a47a57e94469b0b066f01ee1f4898"} Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.097097 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42","Type":"ContainerDied","Data":"4f0f1152afea40871d3c6bf29f2e9a6fc26eebc48bcb29f1d2a9076d1759eca4"} Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.097112 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42","Type":"ContainerDied","Data":"5545ae730b47281a17d5b339d9724aed9d54ef0ad399b7e62e79c3034046cd40"} Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.099096 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b72fb68d-e944-4d99-b1d0-eb097c807e14","Type":"ContainerStarted","Data":"0949fd2e92a4b80866f6d206260f90a5b495563f5f24128a7a800564edbd00bf"} Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.100281 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.124549 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.6488655859999999 podStartE2EDuration="2.124533242s" podCreationTimestamp="2026-01-24 08:04:22 +0000 UTC" firstStartedPulling="2026-01-24 08:04:22.971943251 +0000 UTC m=+1401.691816539" lastFinishedPulling="2026-01-24 08:04:23.447610907 +0000 UTC m=+1402.167484195" observedRunningTime="2026-01-24 08:04:24.116050235 +0000 UTC m=+1402.835923533" watchObservedRunningTime="2026-01-24 08:04:24.124533242 +0000 UTC m=+1402.844406530" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.161791 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.176693 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.189817 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 08:04:24 crc kubenswrapper[4705]: E0124 08:04:24.190438 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd04c245-6baf-44c1-8305-792b3e6607ad" containerName="nova-scheduler-scheduler" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 
08:04:24.190468 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd04c245-6baf-44c1-8305-792b3e6607ad" containerName="nova-scheduler-scheduler" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.190734 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd04c245-6baf-44c1-8305-792b3e6607ad" containerName="nova-scheduler-scheduler" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.191701 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.194107 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.202081 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.251651 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqwvq\" (UniqueName: \"kubernetes.io/projected/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-kube-api-access-vqwvq\") pod \"nova-scheduler-0\" (UID: \"3d23c7a5-3169-4f2c-962a-d5454cf0ae93\") " pod="openstack/nova-scheduler-0" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.251763 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3d23c7a5-3169-4f2c-962a-d5454cf0ae93\") " pod="openstack/nova-scheduler-0" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.251981 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-config-data\") pod \"nova-scheduler-0\" (UID: \"3d23c7a5-3169-4f2c-962a-d5454cf0ae93\") " 
pod="openstack/nova-scheduler-0" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.353400 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-config-data\") pod \"nova-scheduler-0\" (UID: \"3d23c7a5-3169-4f2c-962a-d5454cf0ae93\") " pod="openstack/nova-scheduler-0" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.353463 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqwvq\" (UniqueName: \"kubernetes.io/projected/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-kube-api-access-vqwvq\") pod \"nova-scheduler-0\" (UID: \"3d23c7a5-3169-4f2c-962a-d5454cf0ae93\") " pod="openstack/nova-scheduler-0" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.353554 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3d23c7a5-3169-4f2c-962a-d5454cf0ae93\") " pod="openstack/nova-scheduler-0" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.359671 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-config-data\") pod \"nova-scheduler-0\" (UID: \"3d23c7a5-3169-4f2c-962a-d5454cf0ae93\") " pod="openstack/nova-scheduler-0" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.359745 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3d23c7a5-3169-4f2c-962a-d5454cf0ae93\") " pod="openstack/nova-scheduler-0" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.376941 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vqwvq\" (UniqueName: \"kubernetes.io/projected/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-kube-api-access-vqwvq\") pod \"nova-scheduler-0\" (UID: \"3d23c7a5-3169-4f2c-962a-d5454cf0ae93\") " pod="openstack/nova-scheduler-0" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.508874 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 08:04:24 crc kubenswrapper[4705]: I0124 08:04:24.993950 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.011964 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 08:04:25 crc kubenswrapper[4705]: W0124 08:04:25.012349 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d23c7a5_3169_4f2c_962a_d5454cf0ae93.slice/crio-6876910ce4af3c06b63f5b8d7208955df477bd5e89160b306a3d11c80b94dfd8 WatchSource:0}: Error finding container 6876910ce4af3c06b63f5b8d7208955df477bd5e89160b306a3d11c80b94dfd8: Status 404 returned error can't find the container with id 6876910ce4af3c06b63f5b8d7208955df477bd5e89160b306a3d11c80b94dfd8 Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.069283 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee15e305-528c-4453-a4e7-11afe6c9d348-combined-ca-bundle\") pod \"ee15e305-528c-4453-a4e7-11afe6c9d348\" (UID: \"ee15e305-528c-4453-a4e7-11afe6c9d348\") " Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.069329 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee15e305-528c-4453-a4e7-11afe6c9d348-logs\") pod \"ee15e305-528c-4453-a4e7-11afe6c9d348\" (UID: \"ee15e305-528c-4453-a4e7-11afe6c9d348\") " Jan 24 08:04:25 crc 
kubenswrapper[4705]: I0124 08:04:25.069395 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzvqq\" (UniqueName: \"kubernetes.io/projected/ee15e305-528c-4453-a4e7-11afe6c9d348-kube-api-access-kzvqq\") pod \"ee15e305-528c-4453-a4e7-11afe6c9d348\" (UID: \"ee15e305-528c-4453-a4e7-11afe6c9d348\") " Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.069444 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee15e305-528c-4453-a4e7-11afe6c9d348-config-data\") pod \"ee15e305-528c-4453-a4e7-11afe6c9d348\" (UID: \"ee15e305-528c-4453-a4e7-11afe6c9d348\") " Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.071528 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee15e305-528c-4453-a4e7-11afe6c9d348-logs" (OuterVolumeSpecName: "logs") pod "ee15e305-528c-4453-a4e7-11afe6c9d348" (UID: "ee15e305-528c-4453-a4e7-11afe6c9d348"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.074351 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee15e305-528c-4453-a4e7-11afe6c9d348-kube-api-access-kzvqq" (OuterVolumeSpecName: "kube-api-access-kzvqq") pod "ee15e305-528c-4453-a4e7-11afe6c9d348" (UID: "ee15e305-528c-4453-a4e7-11afe6c9d348"). InnerVolumeSpecName "kube-api-access-kzvqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.100117 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee15e305-528c-4453-a4e7-11afe6c9d348-config-data" (OuterVolumeSpecName: "config-data") pod "ee15e305-528c-4453-a4e7-11afe6c9d348" (UID: "ee15e305-528c-4453-a4e7-11afe6c9d348"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.109156 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee15e305-528c-4453-a4e7-11afe6c9d348-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee15e305-528c-4453-a4e7-11afe6c9d348" (UID: "ee15e305-528c-4453-a4e7-11afe6c9d348"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.110890 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d23c7a5-3169-4f2c-962a-d5454cf0ae93","Type":"ContainerStarted","Data":"6876910ce4af3c06b63f5b8d7208955df477bd5e89160b306a3d11c80b94dfd8"} Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.113220 4705 generic.go:334] "Generic (PLEG): container finished" podID="ee15e305-528c-4453-a4e7-11afe6c9d348" containerID="588ff27c9bfafb476295d1f671312ccd8bf9690e562a16d301e5eb15adfe79ba" exitCode=0 Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.113286 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee15e305-528c-4453-a4e7-11afe6c9d348","Type":"ContainerDied","Data":"588ff27c9bfafb476295d1f671312ccd8bf9690e562a16d301e5eb15adfe79ba"} Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.113311 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee15e305-528c-4453-a4e7-11afe6c9d348","Type":"ContainerDied","Data":"85275b4cb5a38031f39037e36fd1faa9069d398b563da80669c097e5eb3a2a42"} Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.113331 4705 scope.go:117] "RemoveContainer" containerID="588ff27c9bfafb476295d1f671312ccd8bf9690e562a16d301e5eb15adfe79ba" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.113782 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.164594 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.165532 4705 scope.go:117] "RemoveContainer" containerID="7300deae6886f2a8121e8ecd52b6b1594e88930186d5eac2e737d40e3ce1a5e8" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.172560 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee15e305-528c-4453-a4e7-11afe6c9d348-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.172599 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee15e305-528c-4453-a4e7-11afe6c9d348-logs\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.172611 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzvqq\" (UniqueName: \"kubernetes.io/projected/ee15e305-528c-4453-a4e7-11afe6c9d348-kube-api-access-kzvqq\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.172627 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee15e305-528c-4453-a4e7-11afe6c9d348-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.177772 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.194083 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 24 08:04:25 crc kubenswrapper[4705]: E0124 08:04:25.194883 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee15e305-528c-4453-a4e7-11afe6c9d348" containerName="nova-api-log" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.194915 4705 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ee15e305-528c-4453-a4e7-11afe6c9d348" containerName="nova-api-log" Jan 24 08:04:25 crc kubenswrapper[4705]: E0124 08:04:25.194948 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee15e305-528c-4453-a4e7-11afe6c9d348" containerName="nova-api-api" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.194958 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee15e305-528c-4453-a4e7-11afe6c9d348" containerName="nova-api-api" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.195297 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee15e305-528c-4453-a4e7-11afe6c9d348" containerName="nova-api-api" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.195326 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee15e305-528c-4453-a4e7-11afe6c9d348" containerName="nova-api-log" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.198421 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.201882 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.212404 4705 scope.go:117] "RemoveContainer" containerID="588ff27c9bfafb476295d1f671312ccd8bf9690e562a16d301e5eb15adfe79ba" Jan 24 08:04:25 crc kubenswrapper[4705]: E0124 08:04:25.212957 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"588ff27c9bfafb476295d1f671312ccd8bf9690e562a16d301e5eb15adfe79ba\": container with ID starting with 588ff27c9bfafb476295d1f671312ccd8bf9690e562a16d301e5eb15adfe79ba not found: ID does not exist" containerID="588ff27c9bfafb476295d1f671312ccd8bf9690e562a16d301e5eb15adfe79ba" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.212988 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"588ff27c9bfafb476295d1f671312ccd8bf9690e562a16d301e5eb15adfe79ba"} err="failed to get container status \"588ff27c9bfafb476295d1f671312ccd8bf9690e562a16d301e5eb15adfe79ba\": rpc error: code = NotFound desc = could not find container \"588ff27c9bfafb476295d1f671312ccd8bf9690e562a16d301e5eb15adfe79ba\": container with ID starting with 588ff27c9bfafb476295d1f671312ccd8bf9690e562a16d301e5eb15adfe79ba not found: ID does not exist" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.213014 4705 scope.go:117] "RemoveContainer" containerID="7300deae6886f2a8121e8ecd52b6b1594e88930186d5eac2e737d40e3ce1a5e8" Jan 24 08:04:25 crc kubenswrapper[4705]: E0124 08:04:25.213271 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7300deae6886f2a8121e8ecd52b6b1594e88930186d5eac2e737d40e3ce1a5e8\": container with ID starting with 7300deae6886f2a8121e8ecd52b6b1594e88930186d5eac2e737d40e3ce1a5e8 not found: 
ID does not exist" containerID="7300deae6886f2a8121e8ecd52b6b1594e88930186d5eac2e737d40e3ce1a5e8" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.213290 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7300deae6886f2a8121e8ecd52b6b1594e88930186d5eac2e737d40e3ce1a5e8"} err="failed to get container status \"7300deae6886f2a8121e8ecd52b6b1594e88930186d5eac2e737d40e3ce1a5e8\": rpc error: code = NotFound desc = could not find container \"7300deae6886f2a8121e8ecd52b6b1594e88930186d5eac2e737d40e3ce1a5e8\": container with ID starting with 7300deae6886f2a8121e8ecd52b6b1594e88930186d5eac2e737d40e3ce1a5e8 not found: ID does not exist" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.217358 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.274777 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9594fbd-0ca9-4624-8fdd-d5e784827a16-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\") " pod="openstack/nova-api-0" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.274845 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9594fbd-0ca9-4624-8fdd-d5e784827a16-config-data\") pod \"nova-api-0\" (UID: \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\") " pod="openstack/nova-api-0" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.274981 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9594fbd-0ca9-4624-8fdd-d5e784827a16-logs\") pod \"nova-api-0\" (UID: \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\") " pod="openstack/nova-api-0" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.275042 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjcm2\" (UniqueName: \"kubernetes.io/projected/b9594fbd-0ca9-4624-8fdd-d5e784827a16-kube-api-access-sjcm2\") pod \"nova-api-0\" (UID: \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\") " pod="openstack/nova-api-0" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.376951 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjcm2\" (UniqueName: \"kubernetes.io/projected/b9594fbd-0ca9-4624-8fdd-d5e784827a16-kube-api-access-sjcm2\") pod \"nova-api-0\" (UID: \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\") " pod="openstack/nova-api-0" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.377106 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9594fbd-0ca9-4624-8fdd-d5e784827a16-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\") " pod="openstack/nova-api-0" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.377135 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9594fbd-0ca9-4624-8fdd-d5e784827a16-config-data\") pod \"nova-api-0\" (UID: \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\") " pod="openstack/nova-api-0" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.377266 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9594fbd-0ca9-4624-8fdd-d5e784827a16-logs\") pod \"nova-api-0\" (UID: \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\") " pod="openstack/nova-api-0" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.378127 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9594fbd-0ca9-4624-8fdd-d5e784827a16-logs\") pod \"nova-api-0\" (UID: 
\"b9594fbd-0ca9-4624-8fdd-d5e784827a16\") " pod="openstack/nova-api-0" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.382721 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9594fbd-0ca9-4624-8fdd-d5e784827a16-config-data\") pod \"nova-api-0\" (UID: \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\") " pod="openstack/nova-api-0" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.382739 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9594fbd-0ca9-4624-8fdd-d5e784827a16-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\") " pod="openstack/nova-api-0" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.394599 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjcm2\" (UniqueName: \"kubernetes.io/projected/b9594fbd-0ca9-4624-8fdd-d5e784827a16-kube-api-access-sjcm2\") pod \"nova-api-0\" (UID: \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\") " pod="openstack/nova-api-0" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.523965 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.625690 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd04c245-6baf-44c1-8305-792b3e6607ad" path="/var/lib/kubelet/pods/bd04c245-6baf-44c1-8305-792b3e6607ad/volumes" Jan 24 08:04:25 crc kubenswrapper[4705]: I0124 08:04:25.626718 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee15e305-528c-4453-a4e7-11afe6c9d348" path="/var/lib/kubelet/pods/ee15e305-528c-4453-a4e7-11afe6c9d348/volumes" Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.017039 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 08:04:26 crc kubenswrapper[4705]: W0124 08:04:26.025362 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9594fbd_0ca9_4624_8fdd_d5e784827a16.slice/crio-191e5ec5e05f001286bb88eb5a4b465a29907c9f1af21e591930ebc4b0be13f9 WatchSource:0}: Error finding container 191e5ec5e05f001286bb88eb5a4b465a29907c9f1af21e591930ebc4b0be13f9: Status 404 returned error can't find the container with id 191e5ec5e05f001286bb88eb5a4b465a29907c9f1af21e591930ebc4b0be13f9 Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.130814 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9594fbd-0ca9-4624-8fdd-d5e784827a16","Type":"ContainerStarted","Data":"191e5ec5e05f001286bb88eb5a4b465a29907c9f1af21e591930ebc4b0be13f9"} Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.134438 4705 generic.go:334] "Generic (PLEG): container finished" podID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerID="210ddcc0b0a6d2bd196c6067db234a821b417456df4847f36fc7210486703bb2" exitCode=0 Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.134489 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42","Type":"ContainerDied","Data":"210ddcc0b0a6d2bd196c6067db234a821b417456df4847f36fc7210486703bb2"} Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.138256 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d23c7a5-3169-4f2c-962a-d5454cf0ae93","Type":"ContainerStarted","Data":"45aacd4af2eb68230682c8a2f364ab770e59b9e8d6ca34d5b9b485d25b4f30d3"} Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.175209 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.175192294 podStartE2EDuration="2.175192294s" podCreationTimestamp="2026-01-24 08:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:04:26.16286932 +0000 UTC m=+1404.882742628" watchObservedRunningTime="2026-01-24 08:04:26.175192294 +0000 UTC m=+1404.895065572" Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.275435 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.301608 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-sg-core-conf-yaml\") pod \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.306080 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnxrn\" (UniqueName: \"kubernetes.io/projected/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-kube-api-access-rnxrn\") pod \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.306470 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-log-httpd\") pod \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.306581 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-combined-ca-bundle\") pod \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.306654 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-run-httpd\") pod \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.306995 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" (UID: "baa2d9b9-4ed7-447c-a21b-e2a1ea800e42"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.307143 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-scripts\") pod \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.307236 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" (UID: "baa2d9b9-4ed7-447c-a21b-e2a1ea800e42"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.308241 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-config-data\") pod \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\" (UID: \"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42\") " Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.309089 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.309119 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.311386 4705 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-kube-api-access-rnxrn" (OuterVolumeSpecName: "kube-api-access-rnxrn") pod "baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" (UID: "baa2d9b9-4ed7-447c-a21b-e2a1ea800e42"). InnerVolumeSpecName "kube-api-access-rnxrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.312055 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-scripts" (OuterVolumeSpecName: "scripts") pod "baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" (UID: "baa2d9b9-4ed7-447c-a21b-e2a1ea800e42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.366145 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" (UID: "baa2d9b9-4ed7-447c-a21b-e2a1ea800e42"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.411786 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.411849 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.411863 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnxrn\" (UniqueName: \"kubernetes.io/projected/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-kube-api-access-rnxrn\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.417197 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" (UID: "baa2d9b9-4ed7-447c-a21b-e2a1ea800e42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.450024 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-config-data" (OuterVolumeSpecName: "config-data") pod "baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" (UID: "baa2d9b9-4ed7-447c-a21b-e2a1ea800e42"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.514056 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:26 crc kubenswrapper[4705]: I0124 08:04:26.514121 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.148650 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9594fbd-0ca9-4624-8fdd-d5e784827a16","Type":"ContainerStarted","Data":"a37c46e016fa9e8a6746660f5880f338a3d09e2f502ebffd5be3f7b36addb510"} Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.149004 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9594fbd-0ca9-4624-8fdd-d5e784827a16","Type":"ContainerStarted","Data":"e539e2d764812fa222b6a53edac21c6009cfc753555d6da8d8fff721963f53d0"} Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.154650 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baa2d9b9-4ed7-447c-a21b-e2a1ea800e42","Type":"ContainerDied","Data":"3bd5558be09679cfad15742e9aee7d40c90635edff87496b25a6fd0d4cd8fda3"} Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.154681 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.154698 4705 scope.go:117] "RemoveContainer" containerID="f1e8dfc34a8303526c07c7684d685af5b09a47a57e94469b0b066f01ee1f4898" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.185288 4705 scope.go:117] "RemoveContainer" containerID="4f0f1152afea40871d3c6bf29f2e9a6fc26eebc48bcb29f1d2a9076d1759eca4" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.192979 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.192955634 podStartE2EDuration="2.192955634s" podCreationTimestamp="2026-01-24 08:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:04:27.185708331 +0000 UTC m=+1405.905581619" watchObservedRunningTime="2026-01-24 08:04:27.192955634 +0000 UTC m=+1405.912828922" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.228296 4705 scope.go:117] "RemoveContainer" containerID="210ddcc0b0a6d2bd196c6067db234a821b417456df4847f36fc7210486703bb2" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.236776 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.245862 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:04:27 crc kubenswrapper[4705]: E0124 08:04:27.256296 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaa2d9b9_4ed7_447c_a21b_e2a1ea800e42.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaa2d9b9_4ed7_447c_a21b_e2a1ea800e42.slice/crio-3bd5558be09679cfad15742e9aee7d40c90635edff87496b25a6fd0d4cd8fda3\": RecentStats: unable to find data in 
memory cache]" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.264643 4705 scope.go:117] "RemoveContainer" containerID="5545ae730b47281a17d5b339d9724aed9d54ef0ad399b7e62e79c3034046cd40" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.273631 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:04:27 crc kubenswrapper[4705]: E0124 08:04:27.274173 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerName="ceilometer-notification-agent" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.274195 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerName="ceilometer-notification-agent" Jan 24 08:04:27 crc kubenswrapper[4705]: E0124 08:04:27.274228 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerName="sg-core" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.274235 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerName="sg-core" Jan 24 08:04:27 crc kubenswrapper[4705]: E0124 08:04:27.274248 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerName="ceilometer-central-agent" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.274254 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerName="ceilometer-central-agent" Jan 24 08:04:27 crc kubenswrapper[4705]: E0124 08:04:27.274269 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerName="proxy-httpd" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.274275 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerName="proxy-httpd" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 
08:04:27.274440 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerName="sg-core" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.274454 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerName="proxy-httpd" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.274470 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerName="ceilometer-central-agent" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.274488 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" containerName="ceilometer-notification-agent" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.276363 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.278649 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.279240 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.279982 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.284294 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.328874 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc 
kubenswrapper[4705]: I0124 08:04:27.328968 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2140c914-3a58-4bc9-8acc-68e42c74e070-run-httpd\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.329276 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.329331 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-scripts\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.329348 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.329454 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-config-data\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.329499 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2140c914-3a58-4bc9-8acc-68e42c74e070-log-httpd\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.329522 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hcdb\" (UniqueName: \"kubernetes.io/projected/2140c914-3a58-4bc9-8acc-68e42c74e070-kube-api-access-8hcdb\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.431041 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.431089 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2140c914-3a58-4bc9-8acc-68e42c74e070-run-httpd\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.431167 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.431187 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-scripts\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " 
pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.431201 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.431242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-config-data\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.431282 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2140c914-3a58-4bc9-8acc-68e42c74e070-log-httpd\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.431302 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hcdb\" (UniqueName: \"kubernetes.io/projected/2140c914-3a58-4bc9-8acc-68e42c74e070-kube-api-access-8hcdb\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.432096 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2140c914-3a58-4bc9-8acc-68e42c74e070-run-httpd\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.432224 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2140c914-3a58-4bc9-8acc-68e42c74e070-log-httpd\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.436123 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.437050 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.437795 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-scripts\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.438311 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.451656 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-config-data\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.452111 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hcdb\" (UniqueName: \"kubernetes.io/projected/2140c914-3a58-4bc9-8acc-68e42c74e070-kube-api-access-8hcdb\") pod \"ceilometer-0\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " pod="openstack/ceilometer-0" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.586997 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa2d9b9-4ed7-447c-a21b-e2a1ea800e42" path="/var/lib/kubelet/pods/baa2d9b9-4ed7-447c-a21b-e2a1ea800e42/volumes" Jan 24 08:04:27 crc kubenswrapper[4705]: I0124 08:04:27.598604 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:04:28 crc kubenswrapper[4705]: I0124 08:04:28.039304 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:04:28 crc kubenswrapper[4705]: W0124 08:04:28.040158 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2140c914_3a58_4bc9_8acc_68e42c74e070.slice/crio-9bea18c517022b5c9402be63510544ac3d972739d488b7eca5931b22fa6486fd WatchSource:0}: Error finding container 9bea18c517022b5c9402be63510544ac3d972739d488b7eca5931b22fa6486fd: Status 404 returned error can't find the container with id 9bea18c517022b5c9402be63510544ac3d972739d488b7eca5931b22fa6486fd Jan 24 08:04:28 crc kubenswrapper[4705]: I0124 08:04:28.168910 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2140c914-3a58-4bc9-8acc-68e42c74e070","Type":"ContainerStarted","Data":"9bea18c517022b5c9402be63510544ac3d972739d488b7eca5931b22fa6486fd"} Jan 24 08:04:28 crc kubenswrapper[4705]: I0124 08:04:28.409024 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 24 08:04:29 crc kubenswrapper[4705]: I0124 08:04:29.183624 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"2140c914-3a58-4bc9-8acc-68e42c74e070","Type":"ContainerStarted","Data":"197022bc923cce9288fb5b3047e74c667606bf59620a026178e55fa76bd391ed"} Jan 24 08:04:29 crc kubenswrapper[4705]: I0124 08:04:29.509767 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 24 08:04:30 crc kubenswrapper[4705]: I0124 08:04:30.194643 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2140c914-3a58-4bc9-8acc-68e42c74e070","Type":"ContainerStarted","Data":"ee7c3c5a2f5c53dbbcf561ce286ed7fc693147ddf8afb9fd0a60aaf1ee72c279"} Jan 24 08:04:31 crc kubenswrapper[4705]: I0124 08:04:31.204419 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2140c914-3a58-4bc9-8acc-68e42c74e070","Type":"ContainerStarted","Data":"612e047f731d812b61e04523c320cc4c719f10e698191086741c4273ed3194b3"} Jan 24 08:04:32 crc kubenswrapper[4705]: I0124 08:04:32.222433 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2140c914-3a58-4bc9-8acc-68e42c74e070","Type":"ContainerStarted","Data":"f4bd8b628c70c166487711a13f0b5559860251cb6a2776d909848d6d4347c965"} Jan 24 08:04:32 crc kubenswrapper[4705]: I0124 08:04:32.223893 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 08:04:32 crc kubenswrapper[4705]: I0124 08:04:32.251105 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.679020054 podStartE2EDuration="5.251087524s" podCreationTimestamp="2026-01-24 08:04:27 +0000 UTC" firstStartedPulling="2026-01-24 08:04:28.043059254 +0000 UTC m=+1406.762932542" lastFinishedPulling="2026-01-24 08:04:31.615126734 +0000 UTC m=+1410.335000012" observedRunningTime="2026-01-24 08:04:32.246759773 +0000 UTC m=+1410.966633061" watchObservedRunningTime="2026-01-24 08:04:32.251087524 +0000 UTC 
m=+1410.970960812" Jan 24 08:04:32 crc kubenswrapper[4705]: I0124 08:04:32.489929 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 24 08:04:34 crc kubenswrapper[4705]: I0124 08:04:34.509785 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 24 08:04:34 crc kubenswrapper[4705]: I0124 08:04:34.536936 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 24 08:04:35 crc kubenswrapper[4705]: I0124 08:04:35.526586 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 08:04:35 crc kubenswrapper[4705]: I0124 08:04:35.526664 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 08:04:36 crc kubenswrapper[4705]: I0124 08:04:36.565217 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b9594fbd-0ca9-4624-8fdd-d5e784827a16" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.209:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 08:04:36 crc kubenswrapper[4705]: I0124 08:04:36.565217 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b9594fbd-0ca9-4624-8fdd-d5e784827a16" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.209:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 08:04:36 crc kubenswrapper[4705]: E0124 08:04:36.683663 4705 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.109s" Jan 24 08:04:36 crc kubenswrapper[4705]: I0124 08:04:36.771455 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 24 08:04:37 crc kubenswrapper[4705]: I0124 
08:04:37.071232 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:04:37 crc kubenswrapper[4705]: I0124 08:04:37.071365 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.401917 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.410191 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.552362 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krd47\" (UniqueName: \"kubernetes.io/projected/bb9512b4-f71e-4659-9332-165fe5a86c08-kube-api-access-krd47\") pod \"bb9512b4-f71e-4659-9332-165fe5a86c08\" (UID: \"bb9512b4-f71e-4659-9332-165fe5a86c08\") " Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.552600 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb9512b4-f71e-4659-9332-165fe5a86c08-logs\") pod \"bb9512b4-f71e-4659-9332-165fe5a86c08\" (UID: \"bb9512b4-f71e-4659-9332-165fe5a86c08\") " Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.552719 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e6a0e216-7d83-4c64-a162-5dc580401bbb-combined-ca-bundle\") pod \"e6a0e216-7d83-4c64-a162-5dc580401bbb\" (UID: \"e6a0e216-7d83-4c64-a162-5dc580401bbb\") " Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.552806 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9512b4-f71e-4659-9332-165fe5a86c08-combined-ca-bundle\") pod \"bb9512b4-f71e-4659-9332-165fe5a86c08\" (UID: \"bb9512b4-f71e-4659-9332-165fe5a86c08\") " Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.552952 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb9512b4-f71e-4659-9332-165fe5a86c08-config-data\") pod \"bb9512b4-f71e-4659-9332-165fe5a86c08\" (UID: \"bb9512b4-f71e-4659-9332-165fe5a86c08\") " Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.553002 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp6t2\" (UniqueName: \"kubernetes.io/projected/e6a0e216-7d83-4c64-a162-5dc580401bbb-kube-api-access-mp6t2\") pod \"e6a0e216-7d83-4c64-a162-5dc580401bbb\" (UID: \"e6a0e216-7d83-4c64-a162-5dc580401bbb\") " Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.553075 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb9512b4-f71e-4659-9332-165fe5a86c08-logs" (OuterVolumeSpecName: "logs") pod "bb9512b4-f71e-4659-9332-165fe5a86c08" (UID: "bb9512b4-f71e-4659-9332-165fe5a86c08"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.553118 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6a0e216-7d83-4c64-a162-5dc580401bbb-config-data\") pod \"e6a0e216-7d83-4c64-a162-5dc580401bbb\" (UID: \"e6a0e216-7d83-4c64-a162-5dc580401bbb\") " Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.553710 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb9512b4-f71e-4659-9332-165fe5a86c08-logs\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.558235 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9512b4-f71e-4659-9332-165fe5a86c08-kube-api-access-krd47" (OuterVolumeSpecName: "kube-api-access-krd47") pod "bb9512b4-f71e-4659-9332-165fe5a86c08" (UID: "bb9512b4-f71e-4659-9332-165fe5a86c08"). InnerVolumeSpecName "kube-api-access-krd47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.558695 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a0e216-7d83-4c64-a162-5dc580401bbb-kube-api-access-mp6t2" (OuterVolumeSpecName: "kube-api-access-mp6t2") pod "e6a0e216-7d83-4c64-a162-5dc580401bbb" (UID: "e6a0e216-7d83-4c64-a162-5dc580401bbb"). InnerVolumeSpecName "kube-api-access-mp6t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.581306 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a0e216-7d83-4c64-a162-5dc580401bbb-config-data" (OuterVolumeSpecName: "config-data") pod "e6a0e216-7d83-4c64-a162-5dc580401bbb" (UID: "e6a0e216-7d83-4c64-a162-5dc580401bbb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.583748 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a0e216-7d83-4c64-a162-5dc580401bbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6a0e216-7d83-4c64-a162-5dc580401bbb" (UID: "e6a0e216-7d83-4c64-a162-5dc580401bbb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.583902 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb9512b4-f71e-4659-9332-165fe5a86c08-config-data" (OuterVolumeSpecName: "config-data") pod "bb9512b4-f71e-4659-9332-165fe5a86c08" (UID: "bb9512b4-f71e-4659-9332-165fe5a86c08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.584364 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb9512b4-f71e-4659-9332-165fe5a86c08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb9512b4-f71e-4659-9332-165fe5a86c08" (UID: "bb9512b4-f71e-4659-9332-165fe5a86c08"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.654974 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a0e216-7d83-4c64-a162-5dc580401bbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.655016 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9512b4-f71e-4659-9332-165fe5a86c08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.655029 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb9512b4-f71e-4659-9332-165fe5a86c08-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.655042 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp6t2\" (UniqueName: \"kubernetes.io/projected/e6a0e216-7d83-4c64-a162-5dc580401bbb-kube-api-access-mp6t2\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.655055 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6a0e216-7d83-4c64-a162-5dc580401bbb-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.655065 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krd47\" (UniqueName: \"kubernetes.io/projected/bb9512b4-f71e-4659-9332-165fe5a86c08-kube-api-access-krd47\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.740196 4705 generic.go:334] "Generic (PLEG): container finished" podID="e6a0e216-7d83-4c64-a162-5dc580401bbb" containerID="da65891b67687b6e1547984fa9afc5495c427b86cb5700269dcdcc398fb68e8a" exitCode=137 Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 
08:04:42.740253 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e6a0e216-7d83-4c64-a162-5dc580401bbb","Type":"ContainerDied","Data":"da65891b67687b6e1547984fa9afc5495c427b86cb5700269dcdcc398fb68e8a"} Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.740283 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e6a0e216-7d83-4c64-a162-5dc580401bbb","Type":"ContainerDied","Data":"a63db3e8c86590229114f2bf9ecdb879f1c4a46278db93733a1c892e1e71bb1b"} Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.740301 4705 scope.go:117] "RemoveContainer" containerID="da65891b67687b6e1547984fa9afc5495c427b86cb5700269dcdcc398fb68e8a" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.740412 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.746881 4705 generic.go:334] "Generic (PLEG): container finished" podID="bb9512b4-f71e-4659-9332-165fe5a86c08" containerID="552449c158a2b7fcb620413875103cf58c7f3655b7f2503ae32c1806a4f147fb" exitCode=137 Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.746955 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.746975 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb9512b4-f71e-4659-9332-165fe5a86c08","Type":"ContainerDied","Data":"552449c158a2b7fcb620413875103cf58c7f3655b7f2503ae32c1806a4f147fb"} Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.747379 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb9512b4-f71e-4659-9332-165fe5a86c08","Type":"ContainerDied","Data":"ddaef3c79da538f5999b2c42231cd366e74a5f6ad3982921dd477dfe86c815e0"} Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.776931 4705 scope.go:117] "RemoveContainer" containerID="da65891b67687b6e1547984fa9afc5495c427b86cb5700269dcdcc398fb68e8a" Jan 24 08:04:42 crc kubenswrapper[4705]: E0124 08:04:42.777514 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da65891b67687b6e1547984fa9afc5495c427b86cb5700269dcdcc398fb68e8a\": container with ID starting with da65891b67687b6e1547984fa9afc5495c427b86cb5700269dcdcc398fb68e8a not found: ID does not exist" containerID="da65891b67687b6e1547984fa9afc5495c427b86cb5700269dcdcc398fb68e8a" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.777557 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da65891b67687b6e1547984fa9afc5495c427b86cb5700269dcdcc398fb68e8a"} err="failed to get container status \"da65891b67687b6e1547984fa9afc5495c427b86cb5700269dcdcc398fb68e8a\": rpc error: code = NotFound desc = could not find container \"da65891b67687b6e1547984fa9afc5495c427b86cb5700269dcdcc398fb68e8a\": container with ID starting with da65891b67687b6e1547984fa9afc5495c427b86cb5700269dcdcc398fb68e8a not found: ID does not exist" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.777589 4705 scope.go:117] "RemoveContainer" 
containerID="552449c158a2b7fcb620413875103cf58c7f3655b7f2503ae32c1806a4f147fb" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.789017 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.814574 4705 scope.go:117] "RemoveContainer" containerID="c5a057c22aae9c8775edf909f5093e8304ed21c172a2d45bc20089812bde2275" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.820019 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.836659 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.853426 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.853676 4705 scope.go:117] "RemoveContainer" containerID="552449c158a2b7fcb620413875103cf58c7f3655b7f2503ae32c1806a4f147fb" Jan 24 08:04:42 crc kubenswrapper[4705]: E0124 08:04:42.856721 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552449c158a2b7fcb620413875103cf58c7f3655b7f2503ae32c1806a4f147fb\": container with ID starting with 552449c158a2b7fcb620413875103cf58c7f3655b7f2503ae32c1806a4f147fb not found: ID does not exist" containerID="552449c158a2b7fcb620413875103cf58c7f3655b7f2503ae32c1806a4f147fb" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.856930 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552449c158a2b7fcb620413875103cf58c7f3655b7f2503ae32c1806a4f147fb"} err="failed to get container status \"552449c158a2b7fcb620413875103cf58c7f3655b7f2503ae32c1806a4f147fb\": rpc error: code = NotFound desc = could not find container \"552449c158a2b7fcb620413875103cf58c7f3655b7f2503ae32c1806a4f147fb\": container 
with ID starting with 552449c158a2b7fcb620413875103cf58c7f3655b7f2503ae32c1806a4f147fb not found: ID does not exist" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.857041 4705 scope.go:117] "RemoveContainer" containerID="c5a057c22aae9c8775edf909f5093e8304ed21c172a2d45bc20089812bde2275" Jan 24 08:04:42 crc kubenswrapper[4705]: E0124 08:04:42.860573 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a057c22aae9c8775edf909f5093e8304ed21c172a2d45bc20089812bde2275\": container with ID starting with c5a057c22aae9c8775edf909f5093e8304ed21c172a2d45bc20089812bde2275 not found: ID does not exist" containerID="c5a057c22aae9c8775edf909f5093e8304ed21c172a2d45bc20089812bde2275" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.860624 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a057c22aae9c8775edf909f5093e8304ed21c172a2d45bc20089812bde2275"} err="failed to get container status \"c5a057c22aae9c8775edf909f5093e8304ed21c172a2d45bc20089812bde2275\": rpc error: code = NotFound desc = could not find container \"c5a057c22aae9c8775edf909f5093e8304ed21c172a2d45bc20089812bde2275\": container with ID starting with c5a057c22aae9c8775edf909f5093e8304ed21c172a2d45bc20089812bde2275 not found: ID does not exist" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.865011 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 08:04:42 crc kubenswrapper[4705]: E0124 08:04:42.865475 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a0e216-7d83-4c64-a162-5dc580401bbb" containerName="nova-cell1-novncproxy-novncproxy" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.865489 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a0e216-7d83-4c64-a162-5dc580401bbb" containerName="nova-cell1-novncproxy-novncproxy" Jan 24 08:04:42 crc kubenswrapper[4705]: E0124 
08:04:42.865514 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9512b4-f71e-4659-9332-165fe5a86c08" containerName="nova-metadata-log" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.865524 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9512b4-f71e-4659-9332-165fe5a86c08" containerName="nova-metadata-log" Jan 24 08:04:42 crc kubenswrapper[4705]: E0124 08:04:42.865557 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9512b4-f71e-4659-9332-165fe5a86c08" containerName="nova-metadata-metadata" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.865567 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9512b4-f71e-4659-9332-165fe5a86c08" containerName="nova-metadata-metadata" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.865815 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9512b4-f71e-4659-9332-165fe5a86c08" containerName="nova-metadata-metadata" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.865870 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a0e216-7d83-4c64-a162-5dc580401bbb" containerName="nova-cell1-novncproxy-novncproxy" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.865883 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9512b4-f71e-4659-9332-165fe5a86c08" containerName="nova-metadata-log" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.866578 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.868339 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.869933 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.870359 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.875972 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.891779 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.894958 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.905106 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.908329 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 24 08:04:42 crc kubenswrapper[4705]: I0124 08:04:42.908710 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.064163 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " pod="openstack/nova-metadata-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.064980 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45lmm\" (UniqueName: \"kubernetes.io/projected/90e3deeb-1218-4c9b-9e33-3e720ca605bc-kube-api-access-45lmm\") pod \"nova-cell1-novncproxy-0\" (UID: \"90e3deeb-1218-4c9b-9e33-3e720ca605bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.065035 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-config-data\") pod \"nova-metadata-0\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " pod="openstack/nova-metadata-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.065119 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " pod="openstack/nova-metadata-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.065191 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e3deeb-1218-4c9b-9e33-3e720ca605bc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"90e3deeb-1218-4c9b-9e33-3e720ca605bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.065498 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrb84\" (UniqueName: \"kubernetes.io/projected/44f58822-0740-4534-a8cb-79bf85a8c431-kube-api-access-hrb84\") pod \"nova-metadata-0\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " pod="openstack/nova-metadata-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.065567 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e3deeb-1218-4c9b-9e33-3e720ca605bc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"90e3deeb-1218-4c9b-9e33-3e720ca605bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.065637 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90e3deeb-1218-4c9b-9e33-3e720ca605bc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"90e3deeb-1218-4c9b-9e33-3e720ca605bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.065699 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/90e3deeb-1218-4c9b-9e33-3e720ca605bc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"90e3deeb-1218-4c9b-9e33-3e720ca605bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.065774 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44f58822-0740-4534-a8cb-79bf85a8c431-logs\") pod \"nova-metadata-0\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " pod="openstack/nova-metadata-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.167366 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e3deeb-1218-4c9b-9e33-3e720ca605bc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"90e3deeb-1218-4c9b-9e33-3e720ca605bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.167434 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90e3deeb-1218-4c9b-9e33-3e720ca605bc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"90e3deeb-1218-4c9b-9e33-3e720ca605bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.167462 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e3deeb-1218-4c9b-9e33-3e720ca605bc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"90e3deeb-1218-4c9b-9e33-3e720ca605bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.167499 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44f58822-0740-4534-a8cb-79bf85a8c431-logs\") pod \"nova-metadata-0\" (UID: 
\"44f58822-0740-4534-a8cb-79bf85a8c431\") " pod="openstack/nova-metadata-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.167556 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " pod="openstack/nova-metadata-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.167587 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45lmm\" (UniqueName: \"kubernetes.io/projected/90e3deeb-1218-4c9b-9e33-3e720ca605bc-kube-api-access-45lmm\") pod \"nova-cell1-novncproxy-0\" (UID: \"90e3deeb-1218-4c9b-9e33-3e720ca605bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.167607 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-config-data\") pod \"nova-metadata-0\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " pod="openstack/nova-metadata-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.167628 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " pod="openstack/nova-metadata-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.167654 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e3deeb-1218-4c9b-9e33-3e720ca605bc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"90e3deeb-1218-4c9b-9e33-3e720ca605bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 
08:04:43.167720 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrb84\" (UniqueName: \"kubernetes.io/projected/44f58822-0740-4534-a8cb-79bf85a8c431-kube-api-access-hrb84\") pod \"nova-metadata-0\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " pod="openstack/nova-metadata-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.168223 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44f58822-0740-4534-a8cb-79bf85a8c431-logs\") pod \"nova-metadata-0\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " pod="openstack/nova-metadata-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.172364 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " pod="openstack/nova-metadata-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.173728 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e3deeb-1218-4c9b-9e33-3e720ca605bc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"90e3deeb-1218-4c9b-9e33-3e720ca605bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.174241 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e3deeb-1218-4c9b-9e33-3e720ca605bc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"90e3deeb-1218-4c9b-9e33-3e720ca605bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.177417 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/90e3deeb-1218-4c9b-9e33-3e720ca605bc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"90e3deeb-1218-4c9b-9e33-3e720ca605bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.180444 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " pod="openstack/nova-metadata-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.181785 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-config-data\") pod \"nova-metadata-0\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " pod="openstack/nova-metadata-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.185602 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90e3deeb-1218-4c9b-9e33-3e720ca605bc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"90e3deeb-1218-4c9b-9e33-3e720ca605bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.186385 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45lmm\" (UniqueName: \"kubernetes.io/projected/90e3deeb-1218-4c9b-9e33-3e720ca605bc-kube-api-access-45lmm\") pod \"nova-cell1-novncproxy-0\" (UID: \"90e3deeb-1218-4c9b-9e33-3e720ca605bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.193042 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrb84\" (UniqueName: \"kubernetes.io/projected/44f58822-0740-4534-a8cb-79bf85a8c431-kube-api-access-hrb84\") pod \"nova-metadata-0\" (UID: 
\"44f58822-0740-4534-a8cb-79bf85a8c431\") " pod="openstack/nova-metadata-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.210161 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.228709 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.596993 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9512b4-f71e-4659-9332-165fe5a86c08" path="/var/lib/kubelet/pods/bb9512b4-f71e-4659-9332-165fe5a86c08/volumes" Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.598225 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6a0e216-7d83-4c64-a162-5dc580401bbb" path="/var/lib/kubelet/pods/e6a0e216-7d83-4c64-a162-5dc580401bbb/volumes" Jan 24 08:04:43 crc kubenswrapper[4705]: W0124 08:04:43.658381 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90e3deeb_1218_4c9b_9e33_3e720ca605bc.slice/crio-e09698127ed5e9cce62fd074af6228fbcff6eca9c889b6f6e5bd6177ea973d19 WatchSource:0}: Error finding container e09698127ed5e9cce62fd074af6228fbcff6eca9c889b6f6e5bd6177ea973d19: Status 404 returned error can't find the container with id e09698127ed5e9cce62fd074af6228fbcff6eca9c889b6f6e5bd6177ea973d19 Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.671583 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.738670 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.776517 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"44f58822-0740-4534-a8cb-79bf85a8c431","Type":"ContainerStarted","Data":"3e8fbf62c457e17b14238fbf88d27f9652ad194c54055964d1b8f73de2e8fa50"} Jan 24 08:04:43 crc kubenswrapper[4705]: I0124 08:04:43.780021 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"90e3deeb-1218-4c9b-9e33-3e720ca605bc","Type":"ContainerStarted","Data":"e09698127ed5e9cce62fd074af6228fbcff6eca9c889b6f6e5bd6177ea973d19"} Jan 24 08:04:44 crc kubenswrapper[4705]: I0124 08:04:44.788917 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"90e3deeb-1218-4c9b-9e33-3e720ca605bc","Type":"ContainerStarted","Data":"b0e01c37591b01625b1bdcfc157da5e89d1cd5038b2d0850b0310a12edcdbd18"} Jan 24 08:04:44 crc kubenswrapper[4705]: I0124 08:04:44.792091 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44f58822-0740-4534-a8cb-79bf85a8c431","Type":"ContainerStarted","Data":"8199232f73cba6877b999d586eef79c9215b89c027c482eddfdf7a86c71b3972"} Jan 24 08:04:44 crc kubenswrapper[4705]: I0124 08:04:44.792132 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44f58822-0740-4534-a8cb-79bf85a8c431","Type":"ContainerStarted","Data":"219d391d7f223dcd4d956e067e1dbeed346958f7bdb86b8967854e316dad4835"} Jan 24 08:04:44 crc kubenswrapper[4705]: I0124 08:04:44.812548 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.812525759 podStartE2EDuration="2.812525759s" podCreationTimestamp="2026-01-24 08:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:04:44.806589403 +0000 UTC m=+1423.526462701" watchObservedRunningTime="2026-01-24 08:04:44.812525759 +0000 UTC m=+1423.532399047" Jan 24 08:04:44 crc kubenswrapper[4705]: I0124 
08:04:44.835696 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.835675747 podStartE2EDuration="2.835675747s" podCreationTimestamp="2026-01-24 08:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:04:44.831302394 +0000 UTC m=+1423.551175692" watchObservedRunningTime="2026-01-24 08:04:44.835675747 +0000 UTC m=+1423.555549035" Jan 24 08:04:45 crc kubenswrapper[4705]: I0124 08:04:45.528291 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 24 08:04:45 crc kubenswrapper[4705]: I0124 08:04:45.529464 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 24 08:04:45 crc kubenswrapper[4705]: I0124 08:04:45.529642 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 24 08:04:45 crc kubenswrapper[4705]: I0124 08:04:45.537256 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 24 08:04:45 crc kubenswrapper[4705]: I0124 08:04:45.800007 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 24 08:04:45 crc kubenswrapper[4705]: I0124 08:04:45.803231 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.141724 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-f8dq6"] Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.144441 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.180414 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.180921 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.180993 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.181093 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.181180 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-config\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: 
\"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.181243 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94nw5\" (UniqueName: \"kubernetes.io/projected/621f4cfa-d068-40f9-8a4e-573c5271f499-kube-api-access-94nw5\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.197876 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-f8dq6"] Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.283985 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-config\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.284062 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94nw5\" (UniqueName: \"kubernetes.io/projected/621f4cfa-d068-40f9-8a4e-573c5271f499-kube-api-access-94nw5\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.284109 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.284165 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.284204 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.284258 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.285395 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.285993 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-config\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.286687 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: 
\"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.287037 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.287973 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.317658 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94nw5\" (UniqueName: \"kubernetes.io/projected/621f4cfa-d068-40f9-8a4e-573c5271f499-kube-api-access-94nw5\") pod \"dnsmasq-dns-f84f9ccf-f8dq6\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:46 crc kubenswrapper[4705]: I0124 08:04:46.496353 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:47 crc kubenswrapper[4705]: I0124 08:04:47.149145 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-f8dq6"] Jan 24 08:04:47 crc kubenswrapper[4705]: I0124 08:04:47.845788 4705 generic.go:334] "Generic (PLEG): container finished" podID="621f4cfa-d068-40f9-8a4e-573c5271f499" containerID="0ef5617b6bc7a20e7c1600599eaa235c25692d197cb1ab68964b1accde93a633" exitCode=0 Jan 24 08:04:47 crc kubenswrapper[4705]: I0124 08:04:47.846027 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" event={"ID":"621f4cfa-d068-40f9-8a4e-573c5271f499","Type":"ContainerDied","Data":"0ef5617b6bc7a20e7c1600599eaa235c25692d197cb1ab68964b1accde93a633"} Jan 24 08:04:47 crc kubenswrapper[4705]: I0124 08:04:47.846305 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" event={"ID":"621f4cfa-d068-40f9-8a4e-573c5271f499","Type":"ContainerStarted","Data":"cbd390c6f4dd3dc28dcc4262934a526ba5f1b8976758b8ee40a2b605f52becbe"} Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.211631 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.229701 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.230122 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.444130 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.444700 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" 
containerName="ceilometer-central-agent" containerID="cri-o://197022bc923cce9288fb5b3047e74c667606bf59620a026178e55fa76bd391ed" gracePeriod=30 Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.444993 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerName="sg-core" containerID="cri-o://612e047f731d812b61e04523c320cc4c719f10e698191086741c4273ed3194b3" gracePeriod=30 Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.445029 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerName="proxy-httpd" containerID="cri-o://f4bd8b628c70c166487711a13f0b5559860251cb6a2776d909848d6d4347c965" gracePeriod=30 Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.445062 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerName="ceilometer-notification-agent" containerID="cri-o://ee7c3c5a2f5c53dbbcf561ce286ed7fc693147ddf8afb9fd0a60aaf1ee72c279" gracePeriod=30 Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.460595 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.210:3000/\": EOF" Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.692442 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.857734 4705 generic.go:334] "Generic (PLEG): container finished" podID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerID="f4bd8b628c70c166487711a13f0b5559860251cb6a2776d909848d6d4347c965" exitCode=0 Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.857776 4705 generic.go:334] "Generic (PLEG): container 
finished" podID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerID="612e047f731d812b61e04523c320cc4c719f10e698191086741c4273ed3194b3" exitCode=2 Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.857807 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2140c914-3a58-4bc9-8acc-68e42c74e070","Type":"ContainerDied","Data":"f4bd8b628c70c166487711a13f0b5559860251cb6a2776d909848d6d4347c965"} Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.857876 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2140c914-3a58-4bc9-8acc-68e42c74e070","Type":"ContainerDied","Data":"612e047f731d812b61e04523c320cc4c719f10e698191086741c4273ed3194b3"} Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.860186 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" event={"ID":"621f4cfa-d068-40f9-8a4e-573c5271f499","Type":"ContainerStarted","Data":"dc890f9d3b7361c251538cd44069211d625c74400eba5d2d69beb6cdfa448e95"} Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.860327 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b9594fbd-0ca9-4624-8fdd-d5e784827a16" containerName="nova-api-log" containerID="cri-o://e539e2d764812fa222b6a53edac21c6009cfc753555d6da8d8fff721963f53d0" gracePeriod=30 Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.860428 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b9594fbd-0ca9-4624-8fdd-d5e784827a16" containerName="nova-api-api" containerID="cri-o://a37c46e016fa9e8a6746660f5880f338a3d09e2f502ebffd5be3f7b36addb510" gracePeriod=30 Jan 24 08:04:48 crc kubenswrapper[4705]: I0124 08:04:48.903081 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" podStartSLOduration=2.903056303 podStartE2EDuration="2.903056303s" 
podCreationTimestamp="2026-01-24 08:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:04:48.893545755 +0000 UTC m=+1427.613419053" watchObservedRunningTime="2026-01-24 08:04:48.903056303 +0000 UTC m=+1427.622929591" Jan 24 08:04:49 crc kubenswrapper[4705]: I0124 08:04:49.872078 4705 generic.go:334] "Generic (PLEG): container finished" podID="b9594fbd-0ca9-4624-8fdd-d5e784827a16" containerID="e539e2d764812fa222b6a53edac21c6009cfc753555d6da8d8fff721963f53d0" exitCode=143 Jan 24 08:04:49 crc kubenswrapper[4705]: I0124 08:04:49.872162 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9594fbd-0ca9-4624-8fdd-d5e784827a16","Type":"ContainerDied","Data":"e539e2d764812fa222b6a53edac21c6009cfc753555d6da8d8fff721963f53d0"} Jan 24 08:04:49 crc kubenswrapper[4705]: I0124 08:04:49.879441 4705 generic.go:334] "Generic (PLEG): container finished" podID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerID="197022bc923cce9288fb5b3047e74c667606bf59620a026178e55fa76bd391ed" exitCode=0 Jan 24 08:04:49 crc kubenswrapper[4705]: I0124 08:04:49.879524 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2140c914-3a58-4bc9-8acc-68e42c74e070","Type":"ContainerDied","Data":"197022bc923cce9288fb5b3047e74c667606bf59620a026178e55fa76bd391ed"} Jan 24 08:04:49 crc kubenswrapper[4705]: I0124 08:04:49.879976 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:50 crc kubenswrapper[4705]: I0124 08:04:50.895638 4705 generic.go:334] "Generic (PLEG): container finished" podID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerID="ee7c3c5a2f5c53dbbcf561ce286ed7fc693147ddf8afb9fd0a60aaf1ee72c279" exitCode=0 Jan 24 08:04:50 crc kubenswrapper[4705]: I0124 08:04:50.895689 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"2140c914-3a58-4bc9-8acc-68e42c74e070","Type":"ContainerDied","Data":"ee7c3c5a2f5c53dbbcf561ce286ed7fc693147ddf8afb9fd0a60aaf1ee72c279"} Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.067320 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.101038 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-config-data\") pod \"2140c914-3a58-4bc9-8acc-68e42c74e070\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.101115 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hcdb\" (UniqueName: \"kubernetes.io/projected/2140c914-3a58-4bc9-8acc-68e42c74e070-kube-api-access-8hcdb\") pod \"2140c914-3a58-4bc9-8acc-68e42c74e070\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.101189 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2140c914-3a58-4bc9-8acc-68e42c74e070-log-httpd\") pod \"2140c914-3a58-4bc9-8acc-68e42c74e070\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.101315 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-scripts\") pod \"2140c914-3a58-4bc9-8acc-68e42c74e070\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.101405 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2140c914-3a58-4bc9-8acc-68e42c74e070-run-httpd\") pod 
\"2140c914-3a58-4bc9-8acc-68e42c74e070\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.101447 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-ceilometer-tls-certs\") pod \"2140c914-3a58-4bc9-8acc-68e42c74e070\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.101470 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-combined-ca-bundle\") pod \"2140c914-3a58-4bc9-8acc-68e42c74e070\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.101492 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-sg-core-conf-yaml\") pod \"2140c914-3a58-4bc9-8acc-68e42c74e070\" (UID: \"2140c914-3a58-4bc9-8acc-68e42c74e070\") " Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.101758 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2140c914-3a58-4bc9-8acc-68e42c74e070-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2140c914-3a58-4bc9-8acc-68e42c74e070" (UID: "2140c914-3a58-4bc9-8acc-68e42c74e070"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.101981 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2140c914-3a58-4bc9-8acc-68e42c74e070-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2140c914-3a58-4bc9-8acc-68e42c74e070" (UID: "2140c914-3a58-4bc9-8acc-68e42c74e070"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.102162 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2140c914-3a58-4bc9-8acc-68e42c74e070-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.102185 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2140c914-3a58-4bc9-8acc-68e42c74e070-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.118766 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2140c914-3a58-4bc9-8acc-68e42c74e070-kube-api-access-8hcdb" (OuterVolumeSpecName: "kube-api-access-8hcdb") pod "2140c914-3a58-4bc9-8acc-68e42c74e070" (UID: "2140c914-3a58-4bc9-8acc-68e42c74e070"). InnerVolumeSpecName "kube-api-access-8hcdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.129893 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-scripts" (OuterVolumeSpecName: "scripts") pod "2140c914-3a58-4bc9-8acc-68e42c74e070" (UID: "2140c914-3a58-4bc9-8acc-68e42c74e070"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.151959 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2140c914-3a58-4bc9-8acc-68e42c74e070" (UID: "2140c914-3a58-4bc9-8acc-68e42c74e070"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.157896 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2140c914-3a58-4bc9-8acc-68e42c74e070" (UID: "2140c914-3a58-4bc9-8acc-68e42c74e070"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.204463 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hcdb\" (UniqueName: \"kubernetes.io/projected/2140c914-3a58-4bc9-8acc-68e42c74e070-kube-api-access-8hcdb\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.204496 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.204506 4705 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.204516 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.295601 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2140c914-3a58-4bc9-8acc-68e42c74e070" (UID: "2140c914-3a58-4bc9-8acc-68e42c74e070"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.299518 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-config-data" (OuterVolumeSpecName: "config-data") pod "2140c914-3a58-4bc9-8acc-68e42c74e070" (UID: "2140c914-3a58-4bc9-8acc-68e42c74e070"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.306425 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.306460 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2140c914-3a58-4bc9-8acc-68e42c74e070-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.909749 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2140c914-3a58-4bc9-8acc-68e42c74e070","Type":"ContainerDied","Data":"9bea18c517022b5c9402be63510544ac3d972739d488b7eca5931b22fa6486fd"} Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.909857 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.910147 4705 scope.go:117] "RemoveContainer" containerID="f4bd8b628c70c166487711a13f0b5559860251cb6a2776d909848d6d4347c965" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.939040 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.945948 4705 scope.go:117] "RemoveContainer" containerID="612e047f731d812b61e04523c320cc4c719f10e698191086741c4273ed3194b3" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.949700 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.972760 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:04:51 crc kubenswrapper[4705]: E0124 08:04:51.973275 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerName="ceilometer-central-agent" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.973321 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerName="ceilometer-central-agent" Jan 24 08:04:51 crc kubenswrapper[4705]: E0124 08:04:51.973352 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerName="sg-core" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.973363 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerName="sg-core" Jan 24 08:04:51 crc kubenswrapper[4705]: E0124 08:04:51.973388 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerName="ceilometer-notification-agent" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.973396 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerName="ceilometer-notification-agent" Jan 24 08:04:51 crc kubenswrapper[4705]: E0124 08:04:51.973419 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerName="proxy-httpd" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.973426 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerName="proxy-httpd" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.973608 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerName="proxy-httpd" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.973626 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerName="ceilometer-notification-agent" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.973637 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerName="sg-core" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.973647 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" containerName="ceilometer-central-agent" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.978920 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.979962 4705 scope.go:117] "RemoveContainer" containerID="ee7c3c5a2f5c53dbbcf561ce286ed7fc693147ddf8afb9fd0a60aaf1ee72c279" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.982369 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.982556 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 08:04:51 crc kubenswrapper[4705]: I0124 08:04:51.982695 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.011531 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.016036 4705 scope.go:117] "RemoveContainer" containerID="197022bc923cce9288fb5b3047e74c667606bf59620a026178e55fa76bd391ed" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.019328 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-scripts\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.019384 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.019486 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c321cd32-3e43-4eb3-aa7a-b5e67978d976-log-httpd\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.019532 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c321cd32-3e43-4eb3-aa7a-b5e67978d976-run-httpd\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.019553 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.019589 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.019895 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-config-data\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.020018 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spvvc\" (UniqueName: \"kubernetes.io/projected/c321cd32-3e43-4eb3-aa7a-b5e67978d976-kube-api-access-spvvc\") pod \"ceilometer-0\" (UID: 
\"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.122372 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c321cd32-3e43-4eb3-aa7a-b5e67978d976-log-httpd\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.122425 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c321cd32-3e43-4eb3-aa7a-b5e67978d976-run-httpd\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.122452 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.122489 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.122567 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-config-data\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.122620 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-spvvc\" (UniqueName: \"kubernetes.io/projected/c321cd32-3e43-4eb3-aa7a-b5e67978d976-kube-api-access-spvvc\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.122670 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-scripts\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.122698 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.123100 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c321cd32-3e43-4eb3-aa7a-b5e67978d976-log-httpd\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.123884 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c321cd32-3e43-4eb3-aa7a-b5e67978d976-run-httpd\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.127069 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc 
kubenswrapper[4705]: I0124 08:04:52.127337 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-scripts\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.128020 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.128151 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-config-data\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.128880 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.140327 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spvvc\" (UniqueName: \"kubernetes.io/projected/c321cd32-3e43-4eb3-aa7a-b5e67978d976-kube-api-access-spvvc\") pod \"ceilometer-0\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " pod="openstack/ceilometer-0" Jan 24 08:04:52 crc kubenswrapper[4705]: I0124 08:04:52.317986 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:52.919524 4705 generic.go:334] "Generic (PLEG): container finished" podID="b9594fbd-0ca9-4624-8fdd-d5e784827a16" containerID="a37c46e016fa9e8a6746660f5880f338a3d09e2f502ebffd5be3f7b36addb510" exitCode=0 Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:52.919875 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9594fbd-0ca9-4624-8fdd-d5e784827a16","Type":"ContainerDied","Data":"a37c46e016fa9e8a6746660f5880f338a3d09e2f502ebffd5be3f7b36addb510"} Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.213880 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.230154 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.230209 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.232142 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.387213 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.586579 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2140c914-3a58-4bc9-8acc-68e42c74e070" path="/var/lib/kubelet/pods/2140c914-3a58-4bc9-8acc-68e42c74e070/volumes" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.684250 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.828511 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9594fbd-0ca9-4624-8fdd-d5e784827a16-combined-ca-bundle\") pod \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\" (UID: \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\") " Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.828567 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjcm2\" (UniqueName: \"kubernetes.io/projected/b9594fbd-0ca9-4624-8fdd-d5e784827a16-kube-api-access-sjcm2\") pod \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\" (UID: \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\") " Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.828616 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9594fbd-0ca9-4624-8fdd-d5e784827a16-config-data\") pod \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\" (UID: \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\") " Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.828925 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9594fbd-0ca9-4624-8fdd-d5e784827a16-logs\") pod \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\" (UID: \"b9594fbd-0ca9-4624-8fdd-d5e784827a16\") " Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.829804 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9594fbd-0ca9-4624-8fdd-d5e784827a16-logs" (OuterVolumeSpecName: "logs") pod "b9594fbd-0ca9-4624-8fdd-d5e784827a16" (UID: "b9594fbd-0ca9-4624-8fdd-d5e784827a16"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.848626 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9594fbd-0ca9-4624-8fdd-d5e784827a16-kube-api-access-sjcm2" (OuterVolumeSpecName: "kube-api-access-sjcm2") pod "b9594fbd-0ca9-4624-8fdd-d5e784827a16" (UID: "b9594fbd-0ca9-4624-8fdd-d5e784827a16"). InnerVolumeSpecName "kube-api-access-sjcm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.876767 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9594fbd-0ca9-4624-8fdd-d5e784827a16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9594fbd-0ca9-4624-8fdd-d5e784827a16" (UID: "b9594fbd-0ca9-4624-8fdd-d5e784827a16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.883586 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9594fbd-0ca9-4624-8fdd-d5e784827a16-config-data" (OuterVolumeSpecName: "config-data") pod "b9594fbd-0ca9-4624-8fdd-d5e784827a16" (UID: "b9594fbd-0ca9-4624-8fdd-d5e784827a16"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.937587 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9594fbd-0ca9-4624-8fdd-d5e784827a16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.937636 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjcm2\" (UniqueName: \"kubernetes.io/projected/b9594fbd-0ca9-4624-8fdd-d5e784827a16-kube-api-access-sjcm2\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.937661 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9594fbd-0ca9-4624-8fdd-d5e784827a16-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.937674 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9594fbd-0ca9-4624-8fdd-d5e784827a16-logs\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.981400 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9594fbd-0ca9-4624-8fdd-d5e784827a16","Type":"ContainerDied","Data":"191e5ec5e05f001286bb88eb5a4b465a29907c9f1af21e591930ebc4b0be13f9"} Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.981474 4705 scope.go:117] "RemoveContainer" containerID="a37c46e016fa9e8a6746660f5880f338a3d09e2f502ebffd5be3f7b36addb510" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.981669 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 08:04:53 crc kubenswrapper[4705]: I0124 08:04:53.990896 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c321cd32-3e43-4eb3-aa7a-b5e67978d976","Type":"ContainerStarted","Data":"a3dd6fd67a0e486566ad7c61ae953da9e58e70e6ce090da208c53509752b3b4e"} Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.029051 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.061619 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.064444 4705 scope.go:117] "RemoveContainer" containerID="e539e2d764812fa222b6a53edac21c6009cfc753555d6da8d8fff721963f53d0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.090410 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.101002 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 24 08:04:54 crc kubenswrapper[4705]: E0124 08:04:54.101524 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9594fbd-0ca9-4624-8fdd-d5e784827a16" containerName="nova-api-log" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.101545 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9594fbd-0ca9-4624-8fdd-d5e784827a16" containerName="nova-api-log" Jan 24 08:04:54 crc kubenswrapper[4705]: E0124 08:04:54.101579 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9594fbd-0ca9-4624-8fdd-d5e784827a16" containerName="nova-api-api" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.101586 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9594fbd-0ca9-4624-8fdd-d5e784827a16" containerName="nova-api-api" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.101840 4705 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b9594fbd-0ca9-4624-8fdd-d5e784827a16" containerName="nova-api-api" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.101866 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9594fbd-0ca9-4624-8fdd-d5e784827a16" containerName="nova-api-log" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.103073 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.105430 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.105756 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.105983 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.113959 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.243754 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="44f58822-0740-4534-a8cb-79bf85a8c431" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.244414 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="44f58822-0740-4534-a8cb-79bf85a8c431" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.263254 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-internal-tls-certs\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.263380 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/711b3a62-e6dd-45f1-a414-b6df50bbe569-logs\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.263427 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-public-tls-certs\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.263497 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l842s\" (UniqueName: \"kubernetes.io/projected/711b3a62-e6dd-45f1-a414-b6df50bbe569-kube-api-access-l842s\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.263564 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-config-data\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.263589 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.297200 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-l9xrj"] Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.298737 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9xrj" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.302320 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.303716 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.317050 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9xrj"] Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.367500 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/711b3a62-e6dd-45f1-a414-b6df50bbe569-logs\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.367643 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-public-tls-certs\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.367713 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggwcc\" (UniqueName: 
\"kubernetes.io/projected/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-kube-api-access-ggwcc\") pod \"nova-cell1-cell-mapping-l9xrj\" (UID: \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\") " pod="openstack/nova-cell1-cell-mapping-l9xrj" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.367774 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-scripts\") pod \"nova-cell1-cell-mapping-l9xrj\" (UID: \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\") " pod="openstack/nova-cell1-cell-mapping-l9xrj" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.367812 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l842s\" (UniqueName: \"kubernetes.io/projected/711b3a62-e6dd-45f1-a414-b6df50bbe569-kube-api-access-l842s\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.367890 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l9xrj\" (UID: \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\") " pod="openstack/nova-cell1-cell-mapping-l9xrj" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.367922 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-config-data\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.367951 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.368046 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-internal-tls-certs\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.368106 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-config-data\") pod \"nova-cell1-cell-mapping-l9xrj\" (UID: \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\") " pod="openstack/nova-cell1-cell-mapping-l9xrj" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.368595 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/711b3a62-e6dd-45f1-a414-b6df50bbe569-logs\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.373155 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.375250 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-public-tls-certs\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: 
I0124 08:04:54.375989 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-config-data\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.376199 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-internal-tls-certs\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.406800 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l842s\" (UniqueName: \"kubernetes.io/projected/711b3a62-e6dd-45f1-a414-b6df50bbe569-kube-api-access-l842s\") pod \"nova-api-0\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.424190 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.469989 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-config-data\") pod \"nova-cell1-cell-mapping-l9xrj\" (UID: \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\") " pod="openstack/nova-cell1-cell-mapping-l9xrj" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.470077 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggwcc\" (UniqueName: \"kubernetes.io/projected/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-kube-api-access-ggwcc\") pod \"nova-cell1-cell-mapping-l9xrj\" (UID: \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\") " pod="openstack/nova-cell1-cell-mapping-l9xrj" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.470128 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-scripts\") pod \"nova-cell1-cell-mapping-l9xrj\" (UID: \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\") " pod="openstack/nova-cell1-cell-mapping-l9xrj" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.470207 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l9xrj\" (UID: \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\") " pod="openstack/nova-cell1-cell-mapping-l9xrj" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.476200 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l9xrj\" (UID: \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\") " pod="openstack/nova-cell1-cell-mapping-l9xrj" 
Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.476815 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-config-data\") pod \"nova-cell1-cell-mapping-l9xrj\" (UID: \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\") " pod="openstack/nova-cell1-cell-mapping-l9xrj" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.487781 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-scripts\") pod \"nova-cell1-cell-mapping-l9xrj\" (UID: \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\") " pod="openstack/nova-cell1-cell-mapping-l9xrj" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.489037 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggwcc\" (UniqueName: \"kubernetes.io/projected/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-kube-api-access-ggwcc\") pod \"nova-cell1-cell-mapping-l9xrj\" (UID: \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\") " pod="openstack/nova-cell1-cell-mapping-l9xrj" Jan 24 08:04:54 crc kubenswrapper[4705]: I0124 08:04:54.619985 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9xrj" Jan 24 08:04:55 crc kubenswrapper[4705]: I0124 08:04:55.079095 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 08:04:55 crc kubenswrapper[4705]: I0124 08:04:55.088811 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c321cd32-3e43-4eb3-aa7a-b5e67978d976","Type":"ContainerStarted","Data":"828f0409a2667a6f31b69c64a012453123202846e0feedd2a9d3751f2a5d0df1"} Jan 24 08:04:55 crc kubenswrapper[4705]: I0124 08:04:55.483534 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9xrj"] Jan 24 08:04:55 crc kubenswrapper[4705]: I0124 08:04:55.593197 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9594fbd-0ca9-4624-8fdd-d5e784827a16" path="/var/lib/kubelet/pods/b9594fbd-0ca9-4624-8fdd-d5e784827a16/volumes" Jan 24 08:04:56 crc kubenswrapper[4705]: I0124 08:04:56.099054 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9xrj" event={"ID":"43f6fcc7-6bf4-4313-95f7-40e61821d3f1","Type":"ContainerStarted","Data":"bc5221212fcca4b95bc82323a40309f34e7018101e7a9458edeba5e9de1d7729"} Jan 24 08:04:56 crc kubenswrapper[4705]: I0124 08:04:56.099127 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9xrj" event={"ID":"43f6fcc7-6bf4-4313-95f7-40e61821d3f1","Type":"ContainerStarted","Data":"8f727459c4229653edbb4d09f0ca25ac742d9cb53d4b13e3972512a2e9cf46c9"} Jan 24 08:04:56 crc kubenswrapper[4705]: I0124 08:04:56.105956 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c321cd32-3e43-4eb3-aa7a-b5e67978d976","Type":"ContainerStarted","Data":"c7991ce896e8845d6895bcceeb6dc54cb4bd749d63d0c2fd88c63a3096c657f4"} Jan 24 08:04:56 crc kubenswrapper[4705]: I0124 08:04:56.106330 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"c321cd32-3e43-4eb3-aa7a-b5e67978d976","Type":"ContainerStarted","Data":"a1dfeca9415ce440fa6bf6a5fcefee47a1fb74d7b847e8656d7f54684348201e"} Jan 24 08:04:56 crc kubenswrapper[4705]: I0124 08:04:56.108591 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"711b3a62-e6dd-45f1-a414-b6df50bbe569","Type":"ContainerStarted","Data":"e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f"} Jan 24 08:04:56 crc kubenswrapper[4705]: I0124 08:04:56.108643 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"711b3a62-e6dd-45f1-a414-b6df50bbe569","Type":"ContainerStarted","Data":"debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72"} Jan 24 08:04:56 crc kubenswrapper[4705]: I0124 08:04:56.108661 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"711b3a62-e6dd-45f1-a414-b6df50bbe569","Type":"ContainerStarted","Data":"75656d4c6dede070f9b5e2f527f201aadc0e7dd186021ed377cdcfbd1605bc90"} Jan 24 08:04:56 crc kubenswrapper[4705]: I0124 08:04:56.122698 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-l9xrj" podStartSLOduration=2.122677947 podStartE2EDuration="2.122677947s" podCreationTimestamp="2026-01-24 08:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:04:56.120206635 +0000 UTC m=+1434.840079923" watchObservedRunningTime="2026-01-24 08:04:56.122677947 +0000 UTC m=+1434.842551235" Jan 24 08:04:56 crc kubenswrapper[4705]: I0124 08:04:56.145794 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.145772025 podStartE2EDuration="2.145772025s" podCreationTimestamp="2026-01-24 08:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:04:56.144676238 +0000 UTC m=+1434.864549546" watchObservedRunningTime="2026-01-24 08:04:56.145772025 +0000 UTC m=+1434.865645313" Jan 24 08:04:56 crc kubenswrapper[4705]: I0124 08:04:56.499232 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:04:56 crc kubenswrapper[4705]: I0124 08:04:56.594179 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-zw4j2"] Jan 24 08:04:56 crc kubenswrapper[4705]: I0124 08:04:56.594601 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" podUID="9f9b02e6-c95b-4670-96d0-9b36e96eb14c" containerName="dnsmasq-dns" containerID="cri-o://23535d85a93298272869c192bf53fcd2204693c3bfebc009ae7696a46111307d" gracePeriod=10 Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.095009 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.103023 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-ovsdbserver-sb\") pod \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.103167 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-dns-svc\") pod \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.103203 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-dns-swift-storage-0\") pod \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.103230 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-config\") pod \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.103285 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-ovsdbserver-nb\") pod \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.103410 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd8k4\" 
(UniqueName: \"kubernetes.io/projected/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-kube-api-access-hd8k4\") pod \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\" (UID: \"9f9b02e6-c95b-4670-96d0-9b36e96eb14c\") " Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.110398 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-kube-api-access-hd8k4" (OuterVolumeSpecName: "kube-api-access-hd8k4") pod "9f9b02e6-c95b-4670-96d0-9b36e96eb14c" (UID: "9f9b02e6-c95b-4670-96d0-9b36e96eb14c"). InnerVolumeSpecName "kube-api-access-hd8k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.141474 4705 generic.go:334] "Generic (PLEG): container finished" podID="9f9b02e6-c95b-4670-96d0-9b36e96eb14c" containerID="23535d85a93298272869c192bf53fcd2204693c3bfebc009ae7696a46111307d" exitCode=0 Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.142797 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" event={"ID":"9f9b02e6-c95b-4670-96d0-9b36e96eb14c","Type":"ContainerDied","Data":"23535d85a93298272869c192bf53fcd2204693c3bfebc009ae7696a46111307d"} Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.142888 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" event={"ID":"9f9b02e6-c95b-4670-96d0-9b36e96eb14c","Type":"ContainerDied","Data":"32fbf42bb0b489bb52a550f66975bf0720e85e08f93b9d032fb661280475441b"} Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.142909 4705 scope.go:117] "RemoveContainer" containerID="23535d85a93298272869c192bf53fcd2204693c3bfebc009ae7696a46111307d" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.143106 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-zw4j2" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.204981 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd8k4\" (UniqueName: \"kubernetes.io/projected/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-kube-api-access-hd8k4\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.227436 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9f9b02e6-c95b-4670-96d0-9b36e96eb14c" (UID: "9f9b02e6-c95b-4670-96d0-9b36e96eb14c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.230391 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f9b02e6-c95b-4670-96d0-9b36e96eb14c" (UID: "9f9b02e6-c95b-4670-96d0-9b36e96eb14c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.231719 4705 scope.go:117] "RemoveContainer" containerID="99fd80ce3d0cbc79a4ea86f3cc2afeffe72a2906f7de0becf45cbf098e3e4ba6" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.248718 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9f9b02e6-c95b-4670-96d0-9b36e96eb14c" (UID: "9f9b02e6-c95b-4670-96d0-9b36e96eb14c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.259684 4705 scope.go:117] "RemoveContainer" containerID="23535d85a93298272869c192bf53fcd2204693c3bfebc009ae7696a46111307d" Jan 24 08:04:57 crc kubenswrapper[4705]: E0124 08:04:57.260238 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23535d85a93298272869c192bf53fcd2204693c3bfebc009ae7696a46111307d\": container with ID starting with 23535d85a93298272869c192bf53fcd2204693c3bfebc009ae7696a46111307d not found: ID does not exist" containerID="23535d85a93298272869c192bf53fcd2204693c3bfebc009ae7696a46111307d" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.260319 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23535d85a93298272869c192bf53fcd2204693c3bfebc009ae7696a46111307d"} err="failed to get container status \"23535d85a93298272869c192bf53fcd2204693c3bfebc009ae7696a46111307d\": rpc error: code = NotFound desc = could not find container \"23535d85a93298272869c192bf53fcd2204693c3bfebc009ae7696a46111307d\": container with ID starting with 23535d85a93298272869c192bf53fcd2204693c3bfebc009ae7696a46111307d not found: ID does not exist" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.260344 4705 scope.go:117] "RemoveContainer" containerID="99fd80ce3d0cbc79a4ea86f3cc2afeffe72a2906f7de0becf45cbf098e3e4ba6" Jan 24 08:04:57 crc kubenswrapper[4705]: E0124 08:04:57.260945 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99fd80ce3d0cbc79a4ea86f3cc2afeffe72a2906f7de0becf45cbf098e3e4ba6\": container with ID starting with 99fd80ce3d0cbc79a4ea86f3cc2afeffe72a2906f7de0becf45cbf098e3e4ba6 not found: ID does not exist" containerID="99fd80ce3d0cbc79a4ea86f3cc2afeffe72a2906f7de0becf45cbf098e3e4ba6" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.260975 
4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99fd80ce3d0cbc79a4ea86f3cc2afeffe72a2906f7de0becf45cbf098e3e4ba6"} err="failed to get container status \"99fd80ce3d0cbc79a4ea86f3cc2afeffe72a2906f7de0becf45cbf098e3e4ba6\": rpc error: code = NotFound desc = could not find container \"99fd80ce3d0cbc79a4ea86f3cc2afeffe72a2906f7de0becf45cbf098e3e4ba6\": container with ID starting with 99fd80ce3d0cbc79a4ea86f3cc2afeffe72a2906f7de0becf45cbf098e3e4ba6 not found: ID does not exist" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.260965 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9f9b02e6-c95b-4670-96d0-9b36e96eb14c" (UID: "9f9b02e6-c95b-4670-96d0-9b36e96eb14c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.263419 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-config" (OuterVolumeSpecName: "config") pod "9f9b02e6-c95b-4670-96d0-9b36e96eb14c" (UID: "9f9b02e6-c95b-4670-96d0-9b36e96eb14c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.309247 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.309287 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.309300 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.309311 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.309323 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f9b02e6-c95b-4670-96d0-9b36e96eb14c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.536980 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-zw4j2"] Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.546504 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-zw4j2"] Jan 24 08:04:57 crc kubenswrapper[4705]: I0124 08:04:57.588497 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f9b02e6-c95b-4670-96d0-9b36e96eb14c" path="/var/lib/kubelet/pods/9f9b02e6-c95b-4670-96d0-9b36e96eb14c/volumes" Jan 24 08:04:58 crc kubenswrapper[4705]: 
I0124 08:04:58.155495 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c321cd32-3e43-4eb3-aa7a-b5e67978d976","Type":"ContainerStarted","Data":"370c21714590a8fc27d58b1d7ae90058a81f95823e6ae8f18182f185bf1c0b1c"} Jan 24 08:04:58 crc kubenswrapper[4705]: I0124 08:04:58.156488 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 08:04:58 crc kubenswrapper[4705]: I0124 08:04:58.181245 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.290400303 podStartE2EDuration="7.181227614s" podCreationTimestamp="2026-01-24 08:04:51 +0000 UTC" firstStartedPulling="2026-01-24 08:04:53.40935383 +0000 UTC m=+1432.129227118" lastFinishedPulling="2026-01-24 08:04:57.300181141 +0000 UTC m=+1436.020054429" observedRunningTime="2026-01-24 08:04:58.177737707 +0000 UTC m=+1436.897610995" watchObservedRunningTime="2026-01-24 08:04:58.181227614 +0000 UTC m=+1436.901100902" Jan 24 08:05:01 crc kubenswrapper[4705]: I0124 08:05:01.203281 4705 generic.go:334] "Generic (PLEG): container finished" podID="43f6fcc7-6bf4-4313-95f7-40e61821d3f1" containerID="bc5221212fcca4b95bc82323a40309f34e7018101e7a9458edeba5e9de1d7729" exitCode=0 Jan 24 08:05:01 crc kubenswrapper[4705]: I0124 08:05:01.203362 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9xrj" event={"ID":"43f6fcc7-6bf4-4313-95f7-40e61821d3f1","Type":"ContainerDied","Data":"bc5221212fcca4b95bc82323a40309f34e7018101e7a9458edeba5e9de1d7729"} Jan 24 08:05:02 crc kubenswrapper[4705]: I0124 08:05:02.619153 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9xrj" Jan 24 08:05:02 crc kubenswrapper[4705]: I0124 08:05:02.676598 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-combined-ca-bundle\") pod \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\" (UID: \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\") " Jan 24 08:05:02 crc kubenswrapper[4705]: I0124 08:05:02.676757 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-config-data\") pod \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\" (UID: \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\") " Jan 24 08:05:02 crc kubenswrapper[4705]: I0124 08:05:02.676777 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggwcc\" (UniqueName: \"kubernetes.io/projected/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-kube-api-access-ggwcc\") pod \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\" (UID: \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\") " Jan 24 08:05:02 crc kubenswrapper[4705]: I0124 08:05:02.676868 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-scripts\") pod \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\" (UID: \"43f6fcc7-6bf4-4313-95f7-40e61821d3f1\") " Jan 24 08:05:02 crc kubenswrapper[4705]: I0124 08:05:02.683033 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-kube-api-access-ggwcc" (OuterVolumeSpecName: "kube-api-access-ggwcc") pod "43f6fcc7-6bf4-4313-95f7-40e61821d3f1" (UID: "43f6fcc7-6bf4-4313-95f7-40e61821d3f1"). InnerVolumeSpecName "kube-api-access-ggwcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:05:02 crc kubenswrapper[4705]: I0124 08:05:02.683504 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-scripts" (OuterVolumeSpecName: "scripts") pod "43f6fcc7-6bf4-4313-95f7-40e61821d3f1" (UID: "43f6fcc7-6bf4-4313-95f7-40e61821d3f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:05:02 crc kubenswrapper[4705]: I0124 08:05:02.705188 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43f6fcc7-6bf4-4313-95f7-40e61821d3f1" (UID: "43f6fcc7-6bf4-4313-95f7-40e61821d3f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:05:02 crc kubenswrapper[4705]: I0124 08:05:02.708941 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-config-data" (OuterVolumeSpecName: "config-data") pod "43f6fcc7-6bf4-4313-95f7-40e61821d3f1" (UID: "43f6fcc7-6bf4-4313-95f7-40e61821d3f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:05:02 crc kubenswrapper[4705]: I0124 08:05:02.778193 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:02 crc kubenswrapper[4705]: I0124 08:05:02.778229 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:02 crc kubenswrapper[4705]: I0124 08:05:02.778240 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:02 crc kubenswrapper[4705]: I0124 08:05:02.778249 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggwcc\" (UniqueName: \"kubernetes.io/projected/43f6fcc7-6bf4-4313-95f7-40e61821d3f1-kube-api-access-ggwcc\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:03 crc kubenswrapper[4705]: I0124 08:05:03.235932 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 24 08:05:03 crc kubenswrapper[4705]: I0124 08:05:03.236474 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 24 08:05:03 crc kubenswrapper[4705]: I0124 08:05:03.249987 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 24 08:05:03 crc kubenswrapper[4705]: I0124 08:05:03.291200 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9xrj" event={"ID":"43f6fcc7-6bf4-4313-95f7-40e61821d3f1","Type":"ContainerDied","Data":"8f727459c4229653edbb4d09f0ca25ac742d9cb53d4b13e3972512a2e9cf46c9"} Jan 24 08:05:03 crc kubenswrapper[4705]: I0124 
08:05:03.291252 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f727459c4229653edbb4d09f0ca25ac742d9cb53d4b13e3972512a2e9cf46c9" Jan 24 08:05:03 crc kubenswrapper[4705]: I0124 08:05:03.291230 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9xrj" Jan 24 08:05:03 crc kubenswrapper[4705]: I0124 08:05:03.296490 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 24 08:05:03 crc kubenswrapper[4705]: I0124 08:05:03.493537 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 08:05:03 crc kubenswrapper[4705]: I0124 08:05:03.500004 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="711b3a62-e6dd-45f1-a414-b6df50bbe569" containerName="nova-api-log" containerID="cri-o://debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72" gracePeriod=30 Jan 24 08:05:03 crc kubenswrapper[4705]: I0124 08:05:03.500317 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="711b3a62-e6dd-45f1-a414-b6df50bbe569" containerName="nova-api-api" containerID="cri-o://e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f" gracePeriod=30 Jan 24 08:05:03 crc kubenswrapper[4705]: I0124 08:05:03.507781 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 08:05:03 crc kubenswrapper[4705]: I0124 08:05:03.508022 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3d23c7a5-3169-4f2c-962a-d5454cf0ae93" containerName="nova-scheduler-scheduler" containerID="cri-o://45aacd4af2eb68230682c8a2f364ab770e59b9e8d6ca34d5b9b485d25b4f30d3" gracePeriod=30 Jan 24 08:05:03 crc kubenswrapper[4705]: I0124 08:05:03.525271 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-metadata-0"] Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.140036 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.213504 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/711b3a62-e6dd-45f1-a414-b6df50bbe569-logs\") pod \"711b3a62-e6dd-45f1-a414-b6df50bbe569\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.213620 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-internal-tls-certs\") pod \"711b3a62-e6dd-45f1-a414-b6df50bbe569\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.213655 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-public-tls-certs\") pod \"711b3a62-e6dd-45f1-a414-b6df50bbe569\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.213725 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l842s\" (UniqueName: \"kubernetes.io/projected/711b3a62-e6dd-45f1-a414-b6df50bbe569-kube-api-access-l842s\") pod \"711b3a62-e6dd-45f1-a414-b6df50bbe569\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.213784 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-config-data\") pod \"711b3a62-e6dd-45f1-a414-b6df50bbe569\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " Jan 24 08:05:04 crc 
kubenswrapper[4705]: I0124 08:05:04.213811 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-combined-ca-bundle\") pod \"711b3a62-e6dd-45f1-a414-b6df50bbe569\" (UID: \"711b3a62-e6dd-45f1-a414-b6df50bbe569\") " Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.213878 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/711b3a62-e6dd-45f1-a414-b6df50bbe569-logs" (OuterVolumeSpecName: "logs") pod "711b3a62-e6dd-45f1-a414-b6df50bbe569" (UID: "711b3a62-e6dd-45f1-a414-b6df50bbe569"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.214275 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/711b3a62-e6dd-45f1-a414-b6df50bbe569-logs\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.223029 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/711b3a62-e6dd-45f1-a414-b6df50bbe569-kube-api-access-l842s" (OuterVolumeSpecName: "kube-api-access-l842s") pod "711b3a62-e6dd-45f1-a414-b6df50bbe569" (UID: "711b3a62-e6dd-45f1-a414-b6df50bbe569"). InnerVolumeSpecName "kube-api-access-l842s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.277232 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-config-data" (OuterVolumeSpecName: "config-data") pod "711b3a62-e6dd-45f1-a414-b6df50bbe569" (UID: "711b3a62-e6dd-45f1-a414-b6df50bbe569"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.279090 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "711b3a62-e6dd-45f1-a414-b6df50bbe569" (UID: "711b3a62-e6dd-45f1-a414-b6df50bbe569"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.279104 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "711b3a62-e6dd-45f1-a414-b6df50bbe569" (UID: "711b3a62-e6dd-45f1-a414-b6df50bbe569"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.288562 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "711b3a62-e6dd-45f1-a414-b6df50bbe569" (UID: "711b3a62-e6dd-45f1-a414-b6df50bbe569"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.301666 4705 generic.go:334] "Generic (PLEG): container finished" podID="711b3a62-e6dd-45f1-a414-b6df50bbe569" containerID="e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f" exitCode=0 Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.301698 4705 generic.go:334] "Generic (PLEG): container finished" podID="711b3a62-e6dd-45f1-a414-b6df50bbe569" containerID="debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72" exitCode=143 Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.301747 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.301764 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"711b3a62-e6dd-45f1-a414-b6df50bbe569","Type":"ContainerDied","Data":"e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f"} Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.301870 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"711b3a62-e6dd-45f1-a414-b6df50bbe569","Type":"ContainerDied","Data":"debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72"} Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.301889 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"711b3a62-e6dd-45f1-a414-b6df50bbe569","Type":"ContainerDied","Data":"75656d4c6dede070f9b5e2f527f201aadc0e7dd186021ed377cdcfbd1605bc90"} Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.301919 4705 scope.go:117] "RemoveContainer" containerID="e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.315971 4705 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.316003 4705 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.316017 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l842s\" (UniqueName: \"kubernetes.io/projected/711b3a62-e6dd-45f1-a414-b6df50bbe569-kube-api-access-l842s\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.316030 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.316041 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711b3a62-e6dd-45f1-a414-b6df50bbe569-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.349031 4705 scope.go:117] "RemoveContainer" containerID="debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.374941 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.380315 4705 scope.go:117] "RemoveContainer" containerID="e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f" Jan 24 08:05:04 crc kubenswrapper[4705]: E0124 08:05:04.380946 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f\": container with ID starting with 
e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f not found: ID does not exist" containerID="e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.380998 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f"} err="failed to get container status \"e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f\": rpc error: code = NotFound desc = could not find container \"e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f\": container with ID starting with e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f not found: ID does not exist" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.381028 4705 scope.go:117] "RemoveContainer" containerID="debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72" Jan 24 08:05:04 crc kubenswrapper[4705]: E0124 08:05:04.381352 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72\": container with ID starting with debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72 not found: ID does not exist" containerID="debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.381382 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72"} err="failed to get container status \"debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72\": rpc error: code = NotFound desc = could not find container \"debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72\": container with ID starting with debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72 not found: ID does not 
exist" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.381401 4705 scope.go:117] "RemoveContainer" containerID="e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.383143 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f"} err="failed to get container status \"e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f\": rpc error: code = NotFound desc = could not find container \"e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f\": container with ID starting with e9632ceb2da26c226a9349659fd9c529c8a9a06322aa383fd3f58703a664cf1f not found: ID does not exist" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.383171 4705 scope.go:117] "RemoveContainer" containerID="debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.385635 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72"} err="failed to get container status \"debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72\": rpc error: code = NotFound desc = could not find container \"debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72\": container with ID starting with debb7faa97063320d52b54473882ac07f52d99f40aa37b3d14c681f8a392ca72 not found: ID does not exist" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.390460 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.402002 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 24 08:05:04 crc kubenswrapper[4705]: E0124 08:05:04.402464 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9f9b02e6-c95b-4670-96d0-9b36e96eb14c" containerName="dnsmasq-dns" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.402489 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9b02e6-c95b-4670-96d0-9b36e96eb14c" containerName="dnsmasq-dns" Jan 24 08:05:04 crc kubenswrapper[4705]: E0124 08:05:04.402525 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="711b3a62-e6dd-45f1-a414-b6df50bbe569" containerName="nova-api-log" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.402533 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="711b3a62-e6dd-45f1-a414-b6df50bbe569" containerName="nova-api-log" Jan 24 08:05:04 crc kubenswrapper[4705]: E0124 08:05:04.402546 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9b02e6-c95b-4670-96d0-9b36e96eb14c" containerName="init" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.402551 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9b02e6-c95b-4670-96d0-9b36e96eb14c" containerName="init" Jan 24 08:05:04 crc kubenswrapper[4705]: E0124 08:05:04.402563 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="711b3a62-e6dd-45f1-a414-b6df50bbe569" containerName="nova-api-api" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.402568 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="711b3a62-e6dd-45f1-a414-b6df50bbe569" containerName="nova-api-api" Jan 24 08:05:04 crc kubenswrapper[4705]: E0124 08:05:04.402584 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f6fcc7-6bf4-4313-95f7-40e61821d3f1" containerName="nova-manage" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.402589 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f6fcc7-6bf4-4313-95f7-40e61821d3f1" containerName="nova-manage" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.402769 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="711b3a62-e6dd-45f1-a414-b6df50bbe569" 
containerName="nova-api-log" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.402790 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9b02e6-c95b-4670-96d0-9b36e96eb14c" containerName="dnsmasq-dns" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.402802 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="43f6fcc7-6bf4-4313-95f7-40e61821d3f1" containerName="nova-manage" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.402812 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="711b3a62-e6dd-45f1-a414-b6df50bbe569" containerName="nova-api-api" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.403805 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.406119 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.406712 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.407251 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.413364 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.417279 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdffe46c-ac47-422d-aec3-896fa1575ca7-config-data\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.417359 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gk5z\" 
(UniqueName: \"kubernetes.io/projected/bdffe46c-ac47-422d-aec3-896fa1575ca7-kube-api-access-8gk5z\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.417395 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdffe46c-ac47-422d-aec3-896fa1575ca7-logs\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.417427 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdffe46c-ac47-422d-aec3-896fa1575ca7-public-tls-certs\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.417445 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdffe46c-ac47-422d-aec3-896fa1575ca7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.417523 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdffe46c-ac47-422d-aec3-896fa1575ca7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: E0124 08:05:04.516875 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="45aacd4af2eb68230682c8a2f364ab770e59b9e8d6ca34d5b9b485d25b4f30d3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 24 08:05:04 crc kubenswrapper[4705]: E0124 08:05:04.518443 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="45aacd4af2eb68230682c8a2f364ab770e59b9e8d6ca34d5b9b485d25b4f30d3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.518843 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gk5z\" (UniqueName: \"kubernetes.io/projected/bdffe46c-ac47-422d-aec3-896fa1575ca7-kube-api-access-8gk5z\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.518964 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdffe46c-ac47-422d-aec3-896fa1575ca7-logs\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.519078 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdffe46c-ac47-422d-aec3-896fa1575ca7-public-tls-certs\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.519164 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdffe46c-ac47-422d-aec3-896fa1575ca7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.519309 
4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdffe46c-ac47-422d-aec3-896fa1575ca7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.519410 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdffe46c-ac47-422d-aec3-896fa1575ca7-config-data\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: E0124 08:05:04.519976 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="45aacd4af2eb68230682c8a2f364ab770e59b9e8d6ca34d5b9b485d25b4f30d3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 24 08:05:04 crc kubenswrapper[4705]: E0124 08:05:04.520014 4705 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3d23c7a5-3169-4f2c-962a-d5454cf0ae93" containerName="nova-scheduler-scheduler" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.520521 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdffe46c-ac47-422d-aec3-896fa1575ca7-logs\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.523952 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdffe46c-ac47-422d-aec3-896fa1575ca7-public-tls-certs\") pod 
\"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.524506 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdffe46c-ac47-422d-aec3-896fa1575ca7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.526027 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdffe46c-ac47-422d-aec3-896fa1575ca7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.526313 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdffe46c-ac47-422d-aec3-896fa1575ca7-config-data\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.536134 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gk5z\" (UniqueName: \"kubernetes.io/projected/bdffe46c-ac47-422d-aec3-896fa1575ca7-kube-api-access-8gk5z\") pod \"nova-api-0\" (UID: \"bdffe46c-ac47-422d-aec3-896fa1575ca7\") " pod="openstack/nova-api-0" Jan 24 08:05:04 crc kubenswrapper[4705]: I0124 08:05:04.734267 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 08:05:05 crc kubenswrapper[4705]: I0124 08:05:05.156920 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 08:05:05 crc kubenswrapper[4705]: W0124 08:05:05.163626 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdffe46c_ac47_422d_aec3_896fa1575ca7.slice/crio-6db8b001fbc29497185d235adfaab7e7d63efe6377a3ffbd4d0f969fc689eed4 WatchSource:0}: Error finding container 6db8b001fbc29497185d235adfaab7e7d63efe6377a3ffbd4d0f969fc689eed4: Status 404 returned error can't find the container with id 6db8b001fbc29497185d235adfaab7e7d63efe6377a3ffbd4d0f969fc689eed4 Jan 24 08:05:05 crc kubenswrapper[4705]: I0124 08:05:05.312898 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdffe46c-ac47-422d-aec3-896fa1575ca7","Type":"ContainerStarted","Data":"6db8b001fbc29497185d235adfaab7e7d63efe6377a3ffbd4d0f969fc689eed4"} Jan 24 08:05:05 crc kubenswrapper[4705]: I0124 08:05:05.314044 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="44f58822-0740-4534-a8cb-79bf85a8c431" containerName="nova-metadata-log" containerID="cri-o://219d391d7f223dcd4d956e067e1dbeed346958f7bdb86b8967854e316dad4835" gracePeriod=30 Jan 24 08:05:05 crc kubenswrapper[4705]: I0124 08:05:05.314152 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="44f58822-0740-4534-a8cb-79bf85a8c431" containerName="nova-metadata-metadata" containerID="cri-o://8199232f73cba6877b999d586eef79c9215b89c027c482eddfdf7a86c71b3972" gracePeriod=30 Jan 24 08:05:05 crc kubenswrapper[4705]: I0124 08:05:05.587810 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="711b3a62-e6dd-45f1-a414-b6df50bbe569" path="/var/lib/kubelet/pods/711b3a62-e6dd-45f1-a414-b6df50bbe569/volumes" Jan 
24 08:05:06 crc kubenswrapper[4705]: I0124 08:05:06.325015 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdffe46c-ac47-422d-aec3-896fa1575ca7","Type":"ContainerStarted","Data":"a857e5609c01bf8c66c3f7d8d0ff07bc9f15ff8263515ee5601cba44a715270e"} Jan 24 08:05:06 crc kubenswrapper[4705]: I0124 08:05:06.325365 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdffe46c-ac47-422d-aec3-896fa1575ca7","Type":"ContainerStarted","Data":"3f7733f86f2559b88c95f1a3957c7d0a55ff0819d0332d6887aa0d48482bc39f"} Jan 24 08:05:06 crc kubenswrapper[4705]: I0124 08:05:06.328785 4705 generic.go:334] "Generic (PLEG): container finished" podID="44f58822-0740-4534-a8cb-79bf85a8c431" containerID="219d391d7f223dcd4d956e067e1dbeed346958f7bdb86b8967854e316dad4835" exitCode=143 Jan 24 08:05:06 crc kubenswrapper[4705]: I0124 08:05:06.328844 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44f58822-0740-4534-a8cb-79bf85a8c431","Type":"ContainerDied","Data":"219d391d7f223dcd4d956e067e1dbeed346958f7bdb86b8967854e316dad4835"} Jan 24 08:05:06 crc kubenswrapper[4705]: I0124 08:05:06.347113 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.347096875 podStartE2EDuration="2.347096875s" podCreationTimestamp="2026-01-24 08:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:05:06.345486484 +0000 UTC m=+1445.065359782" watchObservedRunningTime="2026-01-24 08:05:06.347096875 +0000 UTC m=+1445.066970163" Jan 24 08:05:07 crc kubenswrapper[4705]: I0124 08:05:07.071929 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:05:07 crc kubenswrapper[4705]: I0124 08:05:07.071997 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:05:07 crc kubenswrapper[4705]: I0124 08:05:07.072043 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 08:05:07 crc kubenswrapper[4705]: I0124 08:05:07.072815 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"775d2ed06dfb0c63c8b346ce0d06f95edde444c60e383f78b6d4ebabd731e08f"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 08:05:07 crc kubenswrapper[4705]: I0124 08:05:07.072905 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://775d2ed06dfb0c63c8b346ce0d06f95edde444c60e383f78b6d4ebabd731e08f" gracePeriod=600 Jan 24 08:05:07 crc kubenswrapper[4705]: I0124 08:05:07.340634 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerID="775d2ed06dfb0c63c8b346ce0d06f95edde444c60e383f78b6d4ebabd731e08f" exitCode=0 Jan 24 08:05:07 crc kubenswrapper[4705]: I0124 08:05:07.340703 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"775d2ed06dfb0c63c8b346ce0d06f95edde444c60e383f78b6d4ebabd731e08f"} Jan 24 08:05:07 crc kubenswrapper[4705]: I0124 08:05:07.341457 4705 scope.go:117] "RemoveContainer" containerID="853514deca6f38cd0a77ff6aa66eff5f7cb660b73f8271ebb43497a216af6f05" Jan 24 08:05:08 crc kubenswrapper[4705]: I0124 08:05:08.353177 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7"} Jan 24 08:05:08 crc kubenswrapper[4705]: I0124 08:05:08.445556 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="44f58822-0740-4534-a8cb-79bf85a8c431" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": read tcp 10.217.0.2:38180->10.217.0.212:8775: read: connection reset by peer" Jan 24 08:05:08 crc kubenswrapper[4705]: I0124 08:05:08.446056 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="44f58822-0740-4534-a8cb-79bf85a8c431" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": read tcp 10.217.0.2:38166->10.217.0.212:8775: read: connection reset by peer" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.095861 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.108250 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.180237 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrb84\" (UniqueName: \"kubernetes.io/projected/44f58822-0740-4534-a8cb-79bf85a8c431-kube-api-access-hrb84\") pod \"44f58822-0740-4534-a8cb-79bf85a8c431\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.180377 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-config-data\") pod \"44f58822-0740-4534-a8cb-79bf85a8c431\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.180516 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44f58822-0740-4534-a8cb-79bf85a8c431-logs\") pod \"44f58822-0740-4534-a8cb-79bf85a8c431\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.180584 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-nova-metadata-tls-certs\") pod \"44f58822-0740-4534-a8cb-79bf85a8c431\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.180652 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-combined-ca-bundle\") pod \"3d23c7a5-3169-4f2c-962a-d5454cf0ae93\" (UID: \"3d23c7a5-3169-4f2c-962a-d5454cf0ae93\") " Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.180769 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-config-data\") pod \"3d23c7a5-3169-4f2c-962a-d5454cf0ae93\" (UID: \"3d23c7a5-3169-4f2c-962a-d5454cf0ae93\") " Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.180938 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqwvq\" (UniqueName: \"kubernetes.io/projected/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-kube-api-access-vqwvq\") pod \"3d23c7a5-3169-4f2c-962a-d5454cf0ae93\" (UID: \"3d23c7a5-3169-4f2c-962a-d5454cf0ae93\") " Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.181024 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-combined-ca-bundle\") pod \"44f58822-0740-4534-a8cb-79bf85a8c431\" (UID: \"44f58822-0740-4534-a8cb-79bf85a8c431\") " Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.181388 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44f58822-0740-4534-a8cb-79bf85a8c431-logs" (OuterVolumeSpecName: "logs") pod "44f58822-0740-4534-a8cb-79bf85a8c431" (UID: "44f58822-0740-4534-a8cb-79bf85a8c431"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.182179 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44f58822-0740-4534-a8cb-79bf85a8c431-logs\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.200319 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44f58822-0740-4534-a8cb-79bf85a8c431-kube-api-access-hrb84" (OuterVolumeSpecName: "kube-api-access-hrb84") pod "44f58822-0740-4534-a8cb-79bf85a8c431" (UID: "44f58822-0740-4534-a8cb-79bf85a8c431"). InnerVolumeSpecName "kube-api-access-hrb84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.202077 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-kube-api-access-vqwvq" (OuterVolumeSpecName: "kube-api-access-vqwvq") pod "3d23c7a5-3169-4f2c-962a-d5454cf0ae93" (UID: "3d23c7a5-3169-4f2c-962a-d5454cf0ae93"). InnerVolumeSpecName "kube-api-access-vqwvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.215344 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44f58822-0740-4534-a8cb-79bf85a8c431" (UID: "44f58822-0740-4534-a8cb-79bf85a8c431"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.224870 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d23c7a5-3169-4f2c-962a-d5454cf0ae93" (UID: "3d23c7a5-3169-4f2c-962a-d5454cf0ae93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.232022 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-config-data" (OuterVolumeSpecName: "config-data") pod "3d23c7a5-3169-4f2c-962a-d5454cf0ae93" (UID: "3d23c7a5-3169-4f2c-962a-d5454cf0ae93"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.232482 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-config-data" (OuterVolumeSpecName: "config-data") pod "44f58822-0740-4534-a8cb-79bf85a8c431" (UID: "44f58822-0740-4534-a8cb-79bf85a8c431"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.249080 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "44f58822-0740-4534-a8cb-79bf85a8c431" (UID: "44f58822-0740-4534-a8cb-79bf85a8c431"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.283738 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.283775 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.283786 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqwvq\" (UniqueName: \"kubernetes.io/projected/3d23c7a5-3169-4f2c-962a-d5454cf0ae93-kube-api-access-vqwvq\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.283801 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.283813 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrb84\" (UniqueName: \"kubernetes.io/projected/44f58822-0740-4534-a8cb-79bf85a8c431-kube-api-access-hrb84\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.283841 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.283849 4705 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44f58822-0740-4534-a8cb-79bf85a8c431-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.388060 4705 generic.go:334] "Generic (PLEG): container finished" podID="3d23c7a5-3169-4f2c-962a-d5454cf0ae93" containerID="45aacd4af2eb68230682c8a2f364ab770e59b9e8d6ca34d5b9b485d25b4f30d3" exitCode=0 Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.388146 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d23c7a5-3169-4f2c-962a-d5454cf0ae93","Type":"ContainerDied","Data":"45aacd4af2eb68230682c8a2f364ab770e59b9e8d6ca34d5b9b485d25b4f30d3"} Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.388185 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d23c7a5-3169-4f2c-962a-d5454cf0ae93","Type":"ContainerDied","Data":"6876910ce4af3c06b63f5b8d7208955df477bd5e89160b306a3d11c80b94dfd8"} Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.388206 4705 scope.go:117] "RemoveContainer" containerID="45aacd4af2eb68230682c8a2f364ab770e59b9e8d6ca34d5b9b485d25b4f30d3" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.388357 4705 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.402452 4705 generic.go:334] "Generic (PLEG): container finished" podID="44f58822-0740-4534-a8cb-79bf85a8c431" containerID="8199232f73cba6877b999d586eef79c9215b89c027c482eddfdf7a86c71b3972" exitCode=0 Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.403507 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.405313 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44f58822-0740-4534-a8cb-79bf85a8c431","Type":"ContainerDied","Data":"8199232f73cba6877b999d586eef79c9215b89c027c482eddfdf7a86c71b3972"} Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.405362 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44f58822-0740-4534-a8cb-79bf85a8c431","Type":"ContainerDied","Data":"3e8fbf62c457e17b14238fbf88d27f9652ad194c54055964d1b8f73de2e8fa50"} Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.449064 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.473895 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.483885 4705 scope.go:117] "RemoveContainer" containerID="45aacd4af2eb68230682c8a2f364ab770e59b9e8d6ca34d5b9b485d25b4f30d3" Jan 24 08:05:09 crc kubenswrapper[4705]: E0124 08:05:09.485176 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45aacd4af2eb68230682c8a2f364ab770e59b9e8d6ca34d5b9b485d25b4f30d3\": container with ID starting with 45aacd4af2eb68230682c8a2f364ab770e59b9e8d6ca34d5b9b485d25b4f30d3 not found: ID does not exist" 
containerID="45aacd4af2eb68230682c8a2f364ab770e59b9e8d6ca34d5b9b485d25b4f30d3" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.485229 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45aacd4af2eb68230682c8a2f364ab770e59b9e8d6ca34d5b9b485d25b4f30d3"} err="failed to get container status \"45aacd4af2eb68230682c8a2f364ab770e59b9e8d6ca34d5b9b485d25b4f30d3\": rpc error: code = NotFound desc = could not find container \"45aacd4af2eb68230682c8a2f364ab770e59b9e8d6ca34d5b9b485d25b4f30d3\": container with ID starting with 45aacd4af2eb68230682c8a2f364ab770e59b9e8d6ca34d5b9b485d25b4f30d3 not found: ID does not exist" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.485258 4705 scope.go:117] "RemoveContainer" containerID="8199232f73cba6877b999d586eef79c9215b89c027c482eddfdf7a86c71b3972" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.509032 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.550590 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.575061 4705 scope.go:117] "RemoveContainer" containerID="219d391d7f223dcd4d956e067e1dbeed346958f7bdb86b8967854e316dad4835" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.638623 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d23c7a5-3169-4f2c-962a-d5454cf0ae93" path="/var/lib/kubelet/pods/3d23c7a5-3169-4f2c-962a-d5454cf0ae93/volumes" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.640458 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44f58822-0740-4534-a8cb-79bf85a8c431" path="/var/lib/kubelet/pods/44f58822-0740-4534-a8cb-79bf85a8c431/volumes" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.641453 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 08:05:09 
crc kubenswrapper[4705]: E0124 08:05:09.641768 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d23c7a5-3169-4f2c-962a-d5454cf0ae93" containerName="nova-scheduler-scheduler" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.641783 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d23c7a5-3169-4f2c-962a-d5454cf0ae93" containerName="nova-scheduler-scheduler" Jan 24 08:05:09 crc kubenswrapper[4705]: E0124 08:05:09.641805 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f58822-0740-4534-a8cb-79bf85a8c431" containerName="nova-metadata-metadata" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.641815 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f58822-0740-4534-a8cb-79bf85a8c431" containerName="nova-metadata-metadata" Jan 24 08:05:09 crc kubenswrapper[4705]: E0124 08:05:09.641867 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f58822-0740-4534-a8cb-79bf85a8c431" containerName="nova-metadata-log" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.641873 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f58822-0740-4534-a8cb-79bf85a8c431" containerName="nova-metadata-log" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.642087 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="44f58822-0740-4534-a8cb-79bf85a8c431" containerName="nova-metadata-log" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.642098 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d23c7a5-3169-4f2c-962a-d5454cf0ae93" containerName="nova-scheduler-scheduler" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.642111 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="44f58822-0740-4534-a8cb-79bf85a8c431" containerName="nova-metadata-metadata" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.642755 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 
08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.643350 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.647638 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.652798 4705 scope.go:117] "RemoveContainer" containerID="8199232f73cba6877b999d586eef79c9215b89c027c482eddfdf7a86c71b3972" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.653910 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.655776 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: E0124 08:05:09.656344 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8199232f73cba6877b999d586eef79c9215b89c027c482eddfdf7a86c71b3972\": container with ID starting with 8199232f73cba6877b999d586eef79c9215b89c027c482eddfdf7a86c71b3972 not found: ID does not exist" containerID="8199232f73cba6877b999d586eef79c9215b89c027c482eddfdf7a86c71b3972" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.656404 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8199232f73cba6877b999d586eef79c9215b89c027c482eddfdf7a86c71b3972"} err="failed to get container status \"8199232f73cba6877b999d586eef79c9215b89c027c482eddfdf7a86c71b3972\": rpc error: code = NotFound desc = could not find container \"8199232f73cba6877b999d586eef79c9215b89c027c482eddfdf7a86c71b3972\": container with ID starting with 8199232f73cba6877b999d586eef79c9215b89c027c482eddfdf7a86c71b3972 not found: ID does not exist" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.656440 4705 scope.go:117] "RemoveContainer" 
containerID="219d391d7f223dcd4d956e067e1dbeed346958f7bdb86b8967854e316dad4835" Jan 24 08:05:09 crc kubenswrapper[4705]: E0124 08:05:09.656868 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"219d391d7f223dcd4d956e067e1dbeed346958f7bdb86b8967854e316dad4835\": container with ID starting with 219d391d7f223dcd4d956e067e1dbeed346958f7bdb86b8967854e316dad4835 not found: ID does not exist" containerID="219d391d7f223dcd4d956e067e1dbeed346958f7bdb86b8967854e316dad4835" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.656894 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"219d391d7f223dcd4d956e067e1dbeed346958f7bdb86b8967854e316dad4835"} err="failed to get container status \"219d391d7f223dcd4d956e067e1dbeed346958f7bdb86b8967854e316dad4835\": rpc error: code = NotFound desc = could not find container \"219d391d7f223dcd4d956e067e1dbeed346958f7bdb86b8967854e316dad4835\": container with ID starting with 219d391d7f223dcd4d956e067e1dbeed346958f7bdb86b8967854e316dad4835 not found: ID does not exist" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.658124 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.659771 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.664708 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.830632 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwxh8\" (UniqueName: \"kubernetes.io/projected/ba8d3653-1ade-4d27-a7fa-06e616ffe7f2-kube-api-access-lwxh8\") pod \"nova-scheduler-0\" (UID: \"ba8d3653-1ade-4d27-a7fa-06e616ffe7f2\") " 
pod="openstack/nova-scheduler-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.831384 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba8d3653-1ade-4d27-a7fa-06e616ffe7f2-config-data\") pod \"nova-scheduler-0\" (UID: \"ba8d3653-1ade-4d27-a7fa-06e616ffe7f2\") " pod="openstack/nova-scheduler-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.831434 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbcacec9-3f9e-488c-846b-708af727b753-logs\") pod \"nova-metadata-0\" (UID: \"fbcacec9-3f9e-488c-846b-708af727b753\") " pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.831464 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8d3653-1ade-4d27-a7fa-06e616ffe7f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ba8d3653-1ade-4d27-a7fa-06e616ffe7f2\") " pod="openstack/nova-scheduler-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.831483 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn4vc\" (UniqueName: \"kubernetes.io/projected/fbcacec9-3f9e-488c-846b-708af727b753-kube-api-access-kn4vc\") pod \"nova-metadata-0\" (UID: \"fbcacec9-3f9e-488c-846b-708af727b753\") " pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.831513 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbcacec9-3f9e-488c-846b-708af727b753-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbcacec9-3f9e-488c-846b-708af727b753\") " pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 
08:05:09.831597 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbcacec9-3f9e-488c-846b-708af727b753-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbcacec9-3f9e-488c-846b-708af727b753\") " pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.831621 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbcacec9-3f9e-488c-846b-708af727b753-config-data\") pod \"nova-metadata-0\" (UID: \"fbcacec9-3f9e-488c-846b-708af727b753\") " pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.933144 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn4vc\" (UniqueName: \"kubernetes.io/projected/fbcacec9-3f9e-488c-846b-708af727b753-kube-api-access-kn4vc\") pod \"nova-metadata-0\" (UID: \"fbcacec9-3f9e-488c-846b-708af727b753\") " pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.933209 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbcacec9-3f9e-488c-846b-708af727b753-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbcacec9-3f9e-488c-846b-708af727b753\") " pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.933356 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbcacec9-3f9e-488c-846b-708af727b753-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbcacec9-3f9e-488c-846b-708af727b753\") " pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.933389 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/fbcacec9-3f9e-488c-846b-708af727b753-config-data\") pod \"nova-metadata-0\" (UID: \"fbcacec9-3f9e-488c-846b-708af727b753\") " pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.933437 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwxh8\" (UniqueName: \"kubernetes.io/projected/ba8d3653-1ade-4d27-a7fa-06e616ffe7f2-kube-api-access-lwxh8\") pod \"nova-scheduler-0\" (UID: \"ba8d3653-1ade-4d27-a7fa-06e616ffe7f2\") " pod="openstack/nova-scheduler-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.933468 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba8d3653-1ade-4d27-a7fa-06e616ffe7f2-config-data\") pod \"nova-scheduler-0\" (UID: \"ba8d3653-1ade-4d27-a7fa-06e616ffe7f2\") " pod="openstack/nova-scheduler-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.933504 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbcacec9-3f9e-488c-846b-708af727b753-logs\") pod \"nova-metadata-0\" (UID: \"fbcacec9-3f9e-488c-846b-708af727b753\") " pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.933547 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8d3653-1ade-4d27-a7fa-06e616ffe7f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ba8d3653-1ade-4d27-a7fa-06e616ffe7f2\") " pod="openstack/nova-scheduler-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.934180 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbcacec9-3f9e-488c-846b-708af727b753-logs\") pod \"nova-metadata-0\" (UID: \"fbcacec9-3f9e-488c-846b-708af727b753\") " pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc 
kubenswrapper[4705]: I0124 08:05:09.937572 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbcacec9-3f9e-488c-846b-708af727b753-config-data\") pod \"nova-metadata-0\" (UID: \"fbcacec9-3f9e-488c-846b-708af727b753\") " pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.937599 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbcacec9-3f9e-488c-846b-708af727b753-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbcacec9-3f9e-488c-846b-708af727b753\") " pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.937802 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8d3653-1ade-4d27-a7fa-06e616ffe7f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ba8d3653-1ade-4d27-a7fa-06e616ffe7f2\") " pod="openstack/nova-scheduler-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.938138 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba8d3653-1ade-4d27-a7fa-06e616ffe7f2-config-data\") pod \"nova-scheduler-0\" (UID: \"ba8d3653-1ade-4d27-a7fa-06e616ffe7f2\") " pod="openstack/nova-scheduler-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.939555 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbcacec9-3f9e-488c-846b-708af727b753-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbcacec9-3f9e-488c-846b-708af727b753\") " pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.951344 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwxh8\" (UniqueName: 
\"kubernetes.io/projected/ba8d3653-1ade-4d27-a7fa-06e616ffe7f2-kube-api-access-lwxh8\") pod \"nova-scheduler-0\" (UID: \"ba8d3653-1ade-4d27-a7fa-06e616ffe7f2\") " pod="openstack/nova-scheduler-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.953403 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn4vc\" (UniqueName: \"kubernetes.io/projected/fbcacec9-3f9e-488c-846b-708af727b753-kube-api-access-kn4vc\") pod \"nova-metadata-0\" (UID: \"fbcacec9-3f9e-488c-846b-708af727b753\") " pod="openstack/nova-metadata-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.976223 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 08:05:09 crc kubenswrapper[4705]: I0124 08:05:09.993890 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 08:05:10 crc kubenswrapper[4705]: W0124 08:05:10.421711 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba8d3653_1ade_4d27_a7fa_06e616ffe7f2.slice/crio-eb8cb26771ef7ede6a03f42fdeb8d42647f6bc7f5422c4c47492112e2cb46520 WatchSource:0}: Error finding container eb8cb26771ef7ede6a03f42fdeb8d42647f6bc7f5422c4c47492112e2cb46520: Status 404 returned error can't find the container with id eb8cb26771ef7ede6a03f42fdeb8d42647f6bc7f5422c4c47492112e2cb46520 Jan 24 08:05:10 crc kubenswrapper[4705]: I0124 08:05:10.428766 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 08:05:10 crc kubenswrapper[4705]: I0124 08:05:10.490060 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 08:05:11 crc kubenswrapper[4705]: I0124 08:05:11.435610 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"ba8d3653-1ade-4d27-a7fa-06e616ffe7f2","Type":"ContainerStarted","Data":"7d37ed2b9ee90261b67bac7319aa0bb1acb4ce7d16801d176c6ef600ef093fa1"} Jan 24 08:05:11 crc kubenswrapper[4705]: I0124 08:05:11.435775 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ba8d3653-1ade-4d27-a7fa-06e616ffe7f2","Type":"ContainerStarted","Data":"eb8cb26771ef7ede6a03f42fdeb8d42647f6bc7f5422c4c47492112e2cb46520"} Jan 24 08:05:11 crc kubenswrapper[4705]: I0124 08:05:11.438625 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbcacec9-3f9e-488c-846b-708af727b753","Type":"ContainerStarted","Data":"8c5449b7e6f3a714280da91bd3bbc4049291bcaee6a679926c34ad509e55ddfe"} Jan 24 08:05:11 crc kubenswrapper[4705]: I0124 08:05:11.438675 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbcacec9-3f9e-488c-846b-708af727b753","Type":"ContainerStarted","Data":"e2541e0706362659c9ccabdf031c00de99f4792cb826b8304fd12a2c8e4a6b8f"} Jan 24 08:05:11 crc kubenswrapper[4705]: I0124 08:05:11.438687 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbcacec9-3f9e-488c-846b-708af727b753","Type":"ContainerStarted","Data":"561d588f371dd71a04bfdfc20dc5f864026b509375073699ef5a7784623a7ef7"} Jan 24 08:05:11 crc kubenswrapper[4705]: I0124 08:05:11.459967 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.459945205 podStartE2EDuration="2.459945205s" podCreationTimestamp="2026-01-24 08:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:05:11.454711244 +0000 UTC m=+1450.174584542" watchObservedRunningTime="2026-01-24 08:05:11.459945205 +0000 UTC m=+1450.179818493" Jan 24 08:05:11 crc kubenswrapper[4705]: I0124 08:05:11.476322 4705 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.476302924 podStartE2EDuration="2.476302924s" podCreationTimestamp="2026-01-24 08:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:05:11.471267978 +0000 UTC m=+1450.191141286" watchObservedRunningTime="2026-01-24 08:05:11.476302924 +0000 UTC m=+1450.196176212" Jan 24 08:05:14 crc kubenswrapper[4705]: I0124 08:05:14.735536 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 08:05:14 crc kubenswrapper[4705]: I0124 08:05:14.736101 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 08:05:14 crc kubenswrapper[4705]: I0124 08:05:14.977755 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 24 08:05:14 crc kubenswrapper[4705]: I0124 08:05:14.995044 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 08:05:14 crc kubenswrapper[4705]: I0124 08:05:14.995525 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 08:05:15 crc kubenswrapper[4705]: I0124 08:05:15.747084 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bdffe46c-ac47-422d-aec3-896fa1575ca7" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.217:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 08:05:15 crc kubenswrapper[4705]: I0124 08:05:15.747084 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bdffe46c-ac47-422d-aec3-896fa1575ca7" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.217:8774/\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Jan 24 08:05:19 crc kubenswrapper[4705]: I0124 08:05:19.977833 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 24 08:05:19 crc kubenswrapper[4705]: I0124 08:05:19.994950 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 24 08:05:19 crc kubenswrapper[4705]: I0124 08:05:19.995258 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 24 08:05:20 crc kubenswrapper[4705]: I0124 08:05:20.008004 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 24 08:05:20 crc kubenswrapper[4705]: I0124 08:05:20.557888 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 24 08:05:21 crc kubenswrapper[4705]: I0124 08:05:21.009022 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fbcacec9-3f9e-488c-846b-708af727b753" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 08:05:21 crc kubenswrapper[4705]: I0124 08:05:21.009022 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fbcacec9-3f9e-488c-846b-708af727b753" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 08:05:22 crc kubenswrapper[4705]: I0124 08:05:22.329624 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 24 08:05:24 crc kubenswrapper[4705]: I0124 08:05:24.756305 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 24 
08:05:24 crc kubenswrapper[4705]: I0124 08:05:24.757005 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 24 08:05:24 crc kubenswrapper[4705]: I0124 08:05:24.802581 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 24 08:05:24 crc kubenswrapper[4705]: I0124 08:05:24.818362 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 24 08:05:25 crc kubenswrapper[4705]: I0124 08:05:25.587650 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 24 08:05:25 crc kubenswrapper[4705]: I0124 08:05:25.597160 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 24 08:05:30 crc kubenswrapper[4705]: I0124 08:05:30.003329 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 24 08:05:30 crc kubenswrapper[4705]: I0124 08:05:30.010656 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 24 08:05:30 crc kubenswrapper[4705]: I0124 08:05:30.012814 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 24 08:05:30 crc kubenswrapper[4705]: I0124 08:05:30.625937 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 24 08:05:38 crc kubenswrapper[4705]: I0124 08:05:38.912844 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 08:05:40 crc kubenswrapper[4705]: I0124 08:05:40.732927 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 08:05:44 crc kubenswrapper[4705]: I0124 08:05:44.279934 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" 
podUID="6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" containerName="rabbitmq" containerID="cri-o://ee262a212cdce8acb6049a406b677521701c9877efbea4bda061e8326ba3c980" gracePeriod=604795 Jan 24 08:05:45 crc kubenswrapper[4705]: I0124 08:05:45.656643 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="14a437d6-0b75-49b5-a509-e9dd8beefa45" containerName="rabbitmq" containerID="cri-o://025cbcf5ea07007fc61cf1337223083ce6bad8d6533d249f649ca891701fdc71" gracePeriod=604796 Jan 24 08:05:49 crc kubenswrapper[4705]: I0124 08:05:49.018016 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 24 08:05:49 crc kubenswrapper[4705]: I0124 08:05:49.554030 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="14a437d6-0b75-49b5-a509-e9dd8beefa45" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 24 08:05:50 crc kubenswrapper[4705]: I0124 08:05:50.889041 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.303809 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-erlang-cookie-secret\") pod \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.303898 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-tls\") pod \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.304033 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-server-conf\") pod \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.304091 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.304904 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-pod-info\") pod \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.304935 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-confd\") pod \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.304968 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-erlang-cookie\") pod \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.305107 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mczsh\" (UniqueName: \"kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-kube-api-access-mczsh\") pod \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.305166 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-plugins\") pod \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.305201 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-config-data\") pod \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.305248 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-plugins-conf\") pod \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\" (UID: \"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57\") " Jan 24 08:05:51 crc 
kubenswrapper[4705]: I0124 08:05:51.306941 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" (UID: "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.318076 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" (UID: "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.336941 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" (UID: "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.337044 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-kube-api-access-mczsh" (OuterVolumeSpecName: "kube-api-access-mczsh") pod "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" (UID: "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57"). InnerVolumeSpecName "kube-api-access-mczsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.339357 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-pod-info" (OuterVolumeSpecName: "pod-info") pod "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" (UID: "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.339534 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" (UID: "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.350222 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" (UID: "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.366755 4705 generic.go:334] "Generic (PLEG): container finished" podID="6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" containerID="ee262a212cdce8acb6049a406b677521701c9877efbea4bda061e8326ba3c980" exitCode=0 Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.366859 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57","Type":"ContainerDied","Data":"ee262a212cdce8acb6049a406b677521701c9877efbea4bda061e8326ba3c980"} Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.366890 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6466e4f6-65ac-4f90-99a4-6cf7bc77bc57","Type":"ContainerDied","Data":"330f39a76d7682b6bd6a6d0cc5a5230ffb615544da05699d23ce9ad3764454ef"} Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.366906 4705 scope.go:117] "RemoveContainer" containerID="ee262a212cdce8acb6049a406b677521701c9877efbea4bda061e8326ba3c980" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.367054 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.368004 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" (UID: "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.407298 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.407692 4705 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.407702 4705 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.407713 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.407736 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.407744 4705 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-pod-info\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.407754 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.407762 4705 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mczsh\" (UniqueName: \"kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-kube-api-access-mczsh\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.444892 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-config-data" (OuterVolumeSpecName: "config-data") pod "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" (UID: "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.465581 4705 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.485137 4705 scope.go:117] "RemoveContainer" containerID="241936c95010f43a17b312fb3af3043d1d415f93016b4bb8c8bd6c4b877be142" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.498764 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-server-conf" (OuterVolumeSpecName: "server-conf") pod "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" (UID: "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.510208 4705 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.510247 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.510264 4705 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-server-conf\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.520089 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" (UID: "6466e4f6-65ac-4f90-99a4-6cf7bc77bc57"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.527726 4705 scope.go:117] "RemoveContainer" containerID="ee262a212cdce8acb6049a406b677521701c9877efbea4bda061e8326ba3c980" Jan 24 08:05:51 crc kubenswrapper[4705]: E0124 08:05:51.528635 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee262a212cdce8acb6049a406b677521701c9877efbea4bda061e8326ba3c980\": container with ID starting with ee262a212cdce8acb6049a406b677521701c9877efbea4bda061e8326ba3c980 not found: ID does not exist" containerID="ee262a212cdce8acb6049a406b677521701c9877efbea4bda061e8326ba3c980" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.528669 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee262a212cdce8acb6049a406b677521701c9877efbea4bda061e8326ba3c980"} err="failed to get container status \"ee262a212cdce8acb6049a406b677521701c9877efbea4bda061e8326ba3c980\": rpc error: code = NotFound desc = could not find container \"ee262a212cdce8acb6049a406b677521701c9877efbea4bda061e8326ba3c980\": container with ID starting with ee262a212cdce8acb6049a406b677521701c9877efbea4bda061e8326ba3c980 not found: ID does not exist" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.528695 4705 scope.go:117] "RemoveContainer" containerID="241936c95010f43a17b312fb3af3043d1d415f93016b4bb8c8bd6c4b877be142" Jan 24 08:05:51 crc kubenswrapper[4705]: E0124 08:05:51.529743 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"241936c95010f43a17b312fb3af3043d1d415f93016b4bb8c8bd6c4b877be142\": container with ID starting with 241936c95010f43a17b312fb3af3043d1d415f93016b4bb8c8bd6c4b877be142 not found: ID does not exist" containerID="241936c95010f43a17b312fb3af3043d1d415f93016b4bb8c8bd6c4b877be142" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.529772 
4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"241936c95010f43a17b312fb3af3043d1d415f93016b4bb8c8bd6c4b877be142"} err="failed to get container status \"241936c95010f43a17b312fb3af3043d1d415f93016b4bb8c8bd6c4b877be142\": rpc error: code = NotFound desc = could not find container \"241936c95010f43a17b312fb3af3043d1d415f93016b4bb8c8bd6c4b877be142\": container with ID starting with 241936c95010f43a17b312fb3af3043d1d415f93016b4bb8c8bd6c4b877be142 not found: ID does not exist" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.611858 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.729229 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.743663 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.777351 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 08:05:51 crc kubenswrapper[4705]: E0124 08:05:51.777869 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" containerName="setup-container" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.777891 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" containerName="setup-container" Jan 24 08:05:51 crc kubenswrapper[4705]: E0124 08:05:51.777922 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" containerName="rabbitmq" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.777930 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" containerName="rabbitmq" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.778157 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" containerName="rabbitmq" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.779501 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.785964 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.785993 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.786013 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.785967 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4gkrp" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.787480 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.790289 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.796053 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.798104 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.917045 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/203f66be-7cf6-4664-a0a8-9ed975352414-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.917155 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/203f66be-7cf6-4664-a0a8-9ed975352414-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.917495 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/203f66be-7cf6-4664-a0a8-9ed975352414-pod-info\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.917564 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/203f66be-7cf6-4664-a0a8-9ed975352414-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.917753 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/203f66be-7cf6-4664-a0a8-9ed975352414-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.917777 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/203f66be-7cf6-4664-a0a8-9ed975352414-server-conf\") 
pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.917860 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpjhd\" (UniqueName: \"kubernetes.io/projected/203f66be-7cf6-4664-a0a8-9ed975352414-kube-api-access-xpjhd\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.918133 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.918190 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/203f66be-7cf6-4664-a0a8-9ed975352414-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.918289 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/203f66be-7cf6-4664-a0a8-9ed975352414-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:51 crc kubenswrapper[4705]: I0124 08:05:51.918324 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/203f66be-7cf6-4664-a0a8-9ed975352414-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.020443 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.020505 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/203f66be-7cf6-4664-a0a8-9ed975352414-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.020582 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/203f66be-7cf6-4664-a0a8-9ed975352414-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.020605 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/203f66be-7cf6-4664-a0a8-9ed975352414-config-data\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.020653 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/203f66be-7cf6-4664-a0a8-9ed975352414-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.020708 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/203f66be-7cf6-4664-a0a8-9ed975352414-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.020722 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.021112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/203f66be-7cf6-4664-a0a8-9ed975352414-pod-info\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.021187 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/203f66be-7cf6-4664-a0a8-9ed975352414-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.021422 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/203f66be-7cf6-4664-a0a8-9ed975352414-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.021461 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/203f66be-7cf6-4664-a0a8-9ed975352414-server-conf\") 
pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.021486 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpjhd\" (UniqueName: \"kubernetes.io/projected/203f66be-7cf6-4664-a0a8-9ed975352414-kube-api-access-xpjhd\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.021707 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/203f66be-7cf6-4664-a0a8-9ed975352414-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.022029 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/203f66be-7cf6-4664-a0a8-9ed975352414-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.022306 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/203f66be-7cf6-4664-a0a8-9ed975352414-config-data\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.024106 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/203f66be-7cf6-4664-a0a8-9ed975352414-server-conf\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.025761 
4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/203f66be-7cf6-4664-a0a8-9ed975352414-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.026335 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/203f66be-7cf6-4664-a0a8-9ed975352414-pod-info\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.026760 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/203f66be-7cf6-4664-a0a8-9ed975352414-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.028275 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/203f66be-7cf6-4664-a0a8-9ed975352414-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.030336 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/203f66be-7cf6-4664-a0a8-9ed975352414-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.043981 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpjhd\" (UniqueName: \"kubernetes.io/projected/203f66be-7cf6-4664-a0a8-9ed975352414-kube-api-access-xpjhd\") pod 
\"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.381839 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"203f66be-7cf6-4664-a0a8-9ed975352414\") " pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.412957 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.435126 4705 generic.go:334] "Generic (PLEG): container finished" podID="14a437d6-0b75-49b5-a509-e9dd8beefa45" containerID="025cbcf5ea07007fc61cf1337223083ce6bad8d6533d249f649ca891701fdc71" exitCode=0 Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.435224 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"14a437d6-0b75-49b5-a509-e9dd8beefa45","Type":"ContainerDied","Data":"025cbcf5ea07007fc61cf1337223083ce6bad8d6533d249f649ca891701fdc71"} Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.754769 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.875209 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-erlang-cookie\") pod \"14a437d6-0b75-49b5-a509-e9dd8beefa45\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.875395 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-config-data\") pod \"14a437d6-0b75-49b5-a509-e9dd8beefa45\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.875545 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-server-conf\") pod \"14a437d6-0b75-49b5-a509-e9dd8beefa45\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.875636 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"14a437d6-0b75-49b5-a509-e9dd8beefa45\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.875697 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-confd\") pod \"14a437d6-0b75-49b5-a509-e9dd8beefa45\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.875728 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/14a437d6-0b75-49b5-a509-e9dd8beefa45-erlang-cookie-secret\") pod \"14a437d6-0b75-49b5-a509-e9dd8beefa45\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.875784 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2th7\" (UniqueName: \"kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-kube-api-access-p2th7\") pod \"14a437d6-0b75-49b5-a509-e9dd8beefa45\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.875845 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-tls\") pod \"14a437d6-0b75-49b5-a509-e9dd8beefa45\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.875897 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-plugins\") pod \"14a437d6-0b75-49b5-a509-e9dd8beefa45\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.875963 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14a437d6-0b75-49b5-a509-e9dd8beefa45-pod-info\") pod \"14a437d6-0b75-49b5-a509-e9dd8beefa45\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.876023 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-plugins-conf\") pod \"14a437d6-0b75-49b5-a509-e9dd8beefa45\" (UID: \"14a437d6-0b75-49b5-a509-e9dd8beefa45\") " Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 
08:05:52.877842 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "14a437d6-0b75-49b5-a509-e9dd8beefa45" (UID: "14a437d6-0b75-49b5-a509-e9dd8beefa45"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.878420 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "14a437d6-0b75-49b5-a509-e9dd8beefa45" (UID: "14a437d6-0b75-49b5-a509-e9dd8beefa45"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.886548 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "14a437d6-0b75-49b5-a509-e9dd8beefa45" (UID: "14a437d6-0b75-49b5-a509-e9dd8beefa45"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.901012 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a437d6-0b75-49b5-a509-e9dd8beefa45-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "14a437d6-0b75-49b5-a509-e9dd8beefa45" (UID: "14a437d6-0b75-49b5-a509-e9dd8beefa45"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.918328 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "14a437d6-0b75-49b5-a509-e9dd8beefa45" (UID: "14a437d6-0b75-49b5-a509-e9dd8beefa45"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.918602 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/14a437d6-0b75-49b5-a509-e9dd8beefa45-pod-info" (OuterVolumeSpecName: "pod-info") pod "14a437d6-0b75-49b5-a509-e9dd8beefa45" (UID: "14a437d6-0b75-49b5-a509-e9dd8beefa45"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.919993 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "14a437d6-0b75-49b5-a509-e9dd8beefa45" (UID: "14a437d6-0b75-49b5-a509-e9dd8beefa45"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.980214 4705 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14a437d6-0b75-49b5-a509-e9dd8beefa45-pod-info\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.980251 4705 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.980267 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.980291 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.980301 4705 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14a437d6-0b75-49b5-a509-e9dd8beefa45-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.980311 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:52 crc kubenswrapper[4705]: I0124 08:05:52.980318 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.003910 4705 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-kube-api-access-p2th7" (OuterVolumeSpecName: "kube-api-access-p2th7") pod "14a437d6-0b75-49b5-a509-e9dd8beefa45" (UID: "14a437d6-0b75-49b5-a509-e9dd8beefa45"). InnerVolumeSpecName "kube-api-access-p2th7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.038565 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-config-data" (OuterVolumeSpecName: "config-data") pod "14a437d6-0b75-49b5-a509-e9dd8beefa45" (UID: "14a437d6-0b75-49b5-a509-e9dd8beefa45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.066007 4705 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.071235 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.085947 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.086022 4705 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.086034 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2th7\" (UniqueName: \"kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-kube-api-access-p2th7\") on node \"crc\" DevicePath \"\"" 
Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.095118 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-server-conf" (OuterVolumeSpecName: "server-conf") pod "14a437d6-0b75-49b5-a509-e9dd8beefa45" (UID: "14a437d6-0b75-49b5-a509-e9dd8beefa45"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.181737 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "14a437d6-0b75-49b5-a509-e9dd8beefa45" (UID: "14a437d6-0b75-49b5-a509-e9dd8beefa45"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.188074 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14a437d6-0b75-49b5-a509-e9dd8beefa45-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.188114 4705 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14a437d6-0b75-49b5-a509-e9dd8beefa45-server-conf\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.451645 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"203f66be-7cf6-4664-a0a8-9ed975352414","Type":"ContainerStarted","Data":"e513751f6ef671c84665df512a4a8dd2978e2010659d363e1e4516202f0f9254"} Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.454898 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"14a437d6-0b75-49b5-a509-e9dd8beefa45","Type":"ContainerDied","Data":"c4db1c9b8a80cca84f423ad03382da94d3c292da3a1e65e6daad0afa255cf58a"} Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.454978 4705 scope.go:117] "RemoveContainer" containerID="025cbcf5ea07007fc61cf1337223083ce6bad8d6533d249f649ca891701fdc71" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.454976 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.499907 4705 scope.go:117] "RemoveContainer" containerID="3b1c3bad40fd8d7a85b987d091c4cf75d636a9ba03e19f88b0e3a1f4db4f1716" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.519364 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.566619 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.615344 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14a437d6-0b75-49b5-a509-e9dd8beefa45" path="/var/lib/kubelet/pods/14a437d6-0b75-49b5-a509-e9dd8beefa45/volumes" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.616754 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6466e4f6-65ac-4f90-99a4-6cf7bc77bc57" path="/var/lib/kubelet/pods/6466e4f6-65ac-4f90-99a4-6cf7bc77bc57/volumes" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.620431 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 08:05:53 crc kubenswrapper[4705]: E0124 08:05:53.621022 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a437d6-0b75-49b5-a509-e9dd8beefa45" containerName="rabbitmq" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.621046 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="14a437d6-0b75-49b5-a509-e9dd8beefa45" containerName="rabbitmq" Jan 24 08:05:53 crc kubenswrapper[4705]: E0124 08:05:53.621097 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a437d6-0b75-49b5-a509-e9dd8beefa45" containerName="setup-container" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.621109 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a437d6-0b75-49b5-a509-e9dd8beefa45" containerName="setup-container" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.621433 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="14a437d6-0b75-49b5-a509-e9dd8beefa45" containerName="rabbitmq" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.630536 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.630706 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.798164 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.798195 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.798210 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.798273 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.798516 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.798602 4705 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.798669 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5hpl7" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.885732 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42a4eca6-7e02-48d4-a187-ea503285c378-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.885784 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42a4eca6-7e02-48d4-a187-ea503285c378-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.885809 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42a4eca6-7e02-48d4-a187-ea503285c378-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.885867 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42a4eca6-7e02-48d4-a187-ea503285c378-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.887569 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wmd67\" (UniqueName: \"kubernetes.io/projected/42a4eca6-7e02-48d4-a187-ea503285c378-kube-api-access-wmd67\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.887776 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/42a4eca6-7e02-48d4-a187-ea503285c378-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.887861 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42a4eca6-7e02-48d4-a187-ea503285c378-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.888084 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42a4eca6-7e02-48d4-a187-ea503285c378-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.888119 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42a4eca6-7e02-48d4-a187-ea503285c378-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.888168 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42a4eca6-7e02-48d4-a187-ea503285c378-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.888578 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.990540 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42a4eca6-7e02-48d4-a187-ea503285c378-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.990634 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmd67\" (UniqueName: \"kubernetes.io/projected/42a4eca6-7e02-48d4-a187-ea503285c378-kube-api-access-wmd67\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.990695 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/42a4eca6-7e02-48d4-a187-ea503285c378-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.990730 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/42a4eca6-7e02-48d4-a187-ea503285c378-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.990811 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42a4eca6-7e02-48d4-a187-ea503285c378-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.990849 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42a4eca6-7e02-48d4-a187-ea503285c378-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.991467 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42a4eca6-7e02-48d4-a187-ea503285c378-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.990884 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42a4eca6-7e02-48d4-a187-ea503285c378-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.991540 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.991982 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.992617 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42a4eca6-7e02-48d4-a187-ea503285c378-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.992955 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42a4eca6-7e02-48d4-a187-ea503285c378-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.996148 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42a4eca6-7e02-48d4-a187-ea503285c378-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.996918 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42a4eca6-7e02-48d4-a187-ea503285c378-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 
crc kubenswrapper[4705]: I0124 08:05:53.998982 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42a4eca6-7e02-48d4-a187-ea503285c378-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:53 crc kubenswrapper[4705]: I0124 08:05:53.999007 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/42a4eca6-7e02-48d4-a187-ea503285c378-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.010960 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmd67\" (UniqueName: \"kubernetes.io/projected/42a4eca6-7e02-48d4-a187-ea503285c378-kube-api-access-wmd67\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.011052 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42a4eca6-7e02-48d4-a187-ea503285c378-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.011112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42a4eca6-7e02-48d4-a187-ea503285c378-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.011157 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42a4eca6-7e02-48d4-a187-ea503285c378-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.012031 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42a4eca6-7e02-48d4-a187-ea503285c378-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.012287 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42a4eca6-7e02-48d4-a187-ea503285c378-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.040100 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"42a4eca6-7e02-48d4-a187-ea503285c378\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.150305 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.220637 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-w27nm"] Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.222309 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.226144 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.235444 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-w27nm"] Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.383674 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-w27nm"] Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.420934 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-config\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.420991 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.421021 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.421098 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.421126 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.421157 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.421179 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtw76\" (UniqueName: \"kubernetes.io/projected/9af512e6-0c42-4c2f-9b32-e648ab04b008-kube-api-access-rtw76\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.427370 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-ks5p8"] Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.429451 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.463758 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-ks5p8"] Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.527748 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.527811 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.527895 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.527968 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-config\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.528007 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.528050 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.528086 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slc5f\" (UniqueName: \"kubernetes.io/projected/333dd8c4-e753-48ab-be34-640378c23251-kube-api-access-slc5f\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.528145 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.528199 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.528225 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.528252 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.528295 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtw76\" (UniqueName: \"kubernetes.io/projected/9af512e6-0c42-4c2f-9b32-e648ab04b008-kube-api-access-rtw76\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.528370 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.528444 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-config\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.529026 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.529128 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.529280 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-config\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.529672 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.529960 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.531772 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.554342 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtw76\" (UniqueName: \"kubernetes.io/projected/9af512e6-0c42-4c2f-9b32-e648ab04b008-kube-api-access-rtw76\") pod \"dnsmasq-dns-5b75489c6f-w27nm\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.630645 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.630802 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-config\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.630890 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.630979 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slc5f\" (UniqueName: \"kubernetes.io/projected/333dd8c4-e753-48ab-be34-640378c23251-kube-api-access-slc5f\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: 
\"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.631089 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.631106 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.631238 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.632566 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.633020 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-config\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " 
pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.633422 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.633456 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.634232 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.634272 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/333dd8c4-e753-48ab-be34-640378c23251-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc kubenswrapper[4705]: I0124 08:05:54.652719 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slc5f\" (UniqueName: \"kubernetes.io/projected/333dd8c4-e753-48ab-be34-640378c23251-kube-api-access-slc5f\") pod \"dnsmasq-dns-5d75f767dc-ks5p8\" (UID: \"333dd8c4-e753-48ab-be34-640378c23251\") " pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:54 crc 
kubenswrapper[4705]: I0124 08:05:54.726272 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 08:05:54 crc kubenswrapper[4705]: W0124 08:05:54.726886 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42a4eca6_7e02_48d4_a187_ea503285c378.slice/crio-4a100ccb8fc86f78f83360ee9ccaa641da10158fc3b68f74c2a3677962162778 WatchSource:0}: Error finding container 4a100ccb8fc86f78f83360ee9ccaa641da10158fc3b68f74c2a3677962162778: Status 404 returned error can't find the container with id 4a100ccb8fc86f78f83360ee9ccaa641da10158fc3b68f74c2a3677962162778 Jan 24 08:05:55 crc kubenswrapper[4705]: I0124 08:05:55.213223 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:55 crc kubenswrapper[4705]: I0124 08:05:55.236809 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:55 crc kubenswrapper[4705]: I0124 08:05:55.497096 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"42a4eca6-7e02-48d4-a187-ea503285c378","Type":"ContainerStarted","Data":"4a100ccb8fc86f78f83360ee9ccaa641da10158fc3b68f74c2a3677962162778"} Jan 24 08:05:55 crc kubenswrapper[4705]: I0124 08:05:55.843533 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-w27nm"] Jan 24 08:05:55 crc kubenswrapper[4705]: I0124 08:05:55.933301 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-ks5p8"] Jan 24 08:05:56 crc kubenswrapper[4705]: I0124 08:05:56.507602 4705 generic.go:334] "Generic (PLEG): container finished" podID="9af512e6-0c42-4c2f-9b32-e648ab04b008" containerID="2ee933361681357c864cc417ea7e777cda4bfd752f093ac06342a1313af01139" exitCode=0 Jan 24 08:05:56 crc kubenswrapper[4705]: I0124 
08:05:56.507692 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" event={"ID":"9af512e6-0c42-4c2f-9b32-e648ab04b008","Type":"ContainerDied","Data":"2ee933361681357c864cc417ea7e777cda4bfd752f093ac06342a1313af01139"} Jan 24 08:05:56 crc kubenswrapper[4705]: I0124 08:05:56.507724 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" event={"ID":"9af512e6-0c42-4c2f-9b32-e648ab04b008","Type":"ContainerStarted","Data":"b97335d65afc3d76b3e029483ca1b00dcd0d3ba2e69fba4687699ca472ead85f"} Jan 24 08:05:56 crc kubenswrapper[4705]: I0124 08:05:56.509639 4705 generic.go:334] "Generic (PLEG): container finished" podID="333dd8c4-e753-48ab-be34-640378c23251" containerID="ff05aa8765634237729a7f1503564ea080385516936d58d7618a79ffe9239987" exitCode=0 Jan 24 08:05:56 crc kubenswrapper[4705]: I0124 08:05:56.509681 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" event={"ID":"333dd8c4-e753-48ab-be34-640378c23251","Type":"ContainerDied","Data":"ff05aa8765634237729a7f1503564ea080385516936d58d7618a79ffe9239987"} Jan 24 08:05:56 crc kubenswrapper[4705]: I0124 08:05:56.509707 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" event={"ID":"333dd8c4-e753-48ab-be34-640378c23251","Type":"ContainerStarted","Data":"79c795a01a1cf3e3e4bf7d1f51d36a7f1c977a490268cdd0a45777ddae728071"} Jan 24 08:05:56 crc kubenswrapper[4705]: I0124 08:05:56.909552 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.082169 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtw76\" (UniqueName: \"kubernetes.io/projected/9af512e6-0c42-4c2f-9b32-e648ab04b008-kube-api-access-rtw76\") pod \"9af512e6-0c42-4c2f-9b32-e648ab04b008\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.082549 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-openstack-edpm-ipam\") pod \"9af512e6-0c42-4c2f-9b32-e648ab04b008\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.082603 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-ovsdbserver-sb\") pod \"9af512e6-0c42-4c2f-9b32-e648ab04b008\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.082710 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-dns-swift-storage-0\") pod \"9af512e6-0c42-4c2f-9b32-e648ab04b008\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.082750 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-ovsdbserver-nb\") pod \"9af512e6-0c42-4c2f-9b32-e648ab04b008\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.082793 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-dns-svc\") pod \"9af512e6-0c42-4c2f-9b32-e648ab04b008\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.082838 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-config\") pod \"9af512e6-0c42-4c2f-9b32-e648ab04b008\" (UID: \"9af512e6-0c42-4c2f-9b32-e648ab04b008\") " Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.088066 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af512e6-0c42-4c2f-9b32-e648ab04b008-kube-api-access-rtw76" (OuterVolumeSpecName: "kube-api-access-rtw76") pod "9af512e6-0c42-4c2f-9b32-e648ab04b008" (UID: "9af512e6-0c42-4c2f-9b32-e648ab04b008"). InnerVolumeSpecName "kube-api-access-rtw76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.108430 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9af512e6-0c42-4c2f-9b32-e648ab04b008" (UID: "9af512e6-0c42-4c2f-9b32-e648ab04b008"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.110395 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-config" (OuterVolumeSpecName: "config") pod "9af512e6-0c42-4c2f-9b32-e648ab04b008" (UID: "9af512e6-0c42-4c2f-9b32-e648ab04b008"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.111231 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9af512e6-0c42-4c2f-9b32-e648ab04b008" (UID: "9af512e6-0c42-4c2f-9b32-e648ab04b008"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.111750 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9af512e6-0c42-4c2f-9b32-e648ab04b008" (UID: "9af512e6-0c42-4c2f-9b32-e648ab04b008"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.116152 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "9af512e6-0c42-4c2f-9b32-e648ab04b008" (UID: "9af512e6-0c42-4c2f-9b32-e648ab04b008"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.133230 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9af512e6-0c42-4c2f-9b32-e648ab04b008" (UID: "9af512e6-0c42-4c2f-9b32-e648ab04b008"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.184845 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.184887 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.184900 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.184911 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.184922 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtw76\" (UniqueName: \"kubernetes.io/projected/9af512e6-0c42-4c2f-9b32-e648ab04b008-kube-api-access-rtw76\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.184933 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.184943 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9af512e6-0c42-4c2f-9b32-e648ab04b008-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.761493 
4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.761526 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" event={"ID":"333dd8c4-e753-48ab-be34-640378c23251","Type":"ContainerStarted","Data":"e3bdce29abef1e0b3c6e59e11e50d494d6e00135e874a33897bc7fc9ac10e874"} Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.761920 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.761872 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-w27nm" event={"ID":"9af512e6-0c42-4c2f-9b32-e648ab04b008","Type":"ContainerDied","Data":"b97335d65afc3d76b3e029483ca1b00dcd0d3ba2e69fba4687699ca472ead85f"} Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.762443 4705 scope.go:117] "RemoveContainer" containerID="2ee933361681357c864cc417ea7e777cda4bfd752f093ac06342a1313af01139" Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.763781 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"42a4eca6-7e02-48d4-a187-ea503285c378","Type":"ContainerStarted","Data":"892b2fdba7543b7357a561fe5e1e8d80139978ad85eb99f122a4c0b5a76f9cfb"} Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.789551 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" podStartSLOduration=3.7895313919999998 podStartE2EDuration="3.789531392s" podCreationTimestamp="2026-01-24 08:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:05:57.781399078 +0000 UTC m=+1496.501272376" watchObservedRunningTime="2026-01-24 08:05:57.789531392 +0000 UTC m=+1496.509404680" 
Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.915426 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-w27nm"] Jan 24 08:05:57 crc kubenswrapper[4705]: I0124 08:05:57.925945 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-w27nm"] Jan 24 08:05:58 crc kubenswrapper[4705]: I0124 08:05:58.776079 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"203f66be-7cf6-4664-a0a8-9ed975352414","Type":"ContainerStarted","Data":"7f1f1f015f6f44bbf1c432f307a04b1c27b225866fc4a93a45598a4df9d86b71"} Jan 24 08:05:59 crc kubenswrapper[4705]: I0124 08:05:59.587066 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af512e6-0c42-4c2f-9b32-e648ab04b008" path="/var/lib/kubelet/pods/9af512e6-0c42-4c2f-9b32-e648ab04b008/volumes" Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.239188 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-ks5p8" Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.308803 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-f8dq6"] Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.309104 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" podUID="621f4cfa-d068-40f9-8a4e-573c5271f499" containerName="dnsmasq-dns" containerID="cri-o://dc890f9d3b7361c251538cd44069211d625c74400eba5d2d69beb6cdfa448e95" gracePeriod=10 Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.814887 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.855178 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-config\") pod \"621f4cfa-d068-40f9-8a4e-573c5271f499\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.855230 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94nw5\" (UniqueName: \"kubernetes.io/projected/621f4cfa-d068-40f9-8a4e-573c5271f499-kube-api-access-94nw5\") pod \"621f4cfa-d068-40f9-8a4e-573c5271f499\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.855313 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-ovsdbserver-sb\") pod \"621f4cfa-d068-40f9-8a4e-573c5271f499\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.855403 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-dns-swift-storage-0\") pod \"621f4cfa-d068-40f9-8a4e-573c5271f499\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.855482 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-dns-svc\") pod \"621f4cfa-d068-40f9-8a4e-573c5271f499\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.855504 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-ovsdbserver-nb\") pod \"621f4cfa-d068-40f9-8a4e-573c5271f499\" (UID: \"621f4cfa-d068-40f9-8a4e-573c5271f499\") " Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.899013 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621f4cfa-d068-40f9-8a4e-573c5271f499-kube-api-access-94nw5" (OuterVolumeSpecName: "kube-api-access-94nw5") pod "621f4cfa-d068-40f9-8a4e-573c5271f499" (UID: "621f4cfa-d068-40f9-8a4e-573c5271f499"). InnerVolumeSpecName "kube-api-access-94nw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.925122 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "621f4cfa-d068-40f9-8a4e-573c5271f499" (UID: "621f4cfa-d068-40f9-8a4e-573c5271f499"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.925173 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-config" (OuterVolumeSpecName: "config") pod "621f4cfa-d068-40f9-8a4e-573c5271f499" (UID: "621f4cfa-d068-40f9-8a4e-573c5271f499"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.932566 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "621f4cfa-d068-40f9-8a4e-573c5271f499" (UID: "621f4cfa-d068-40f9-8a4e-573c5271f499"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.938489 4705 generic.go:334] "Generic (PLEG): container finished" podID="621f4cfa-d068-40f9-8a4e-573c5271f499" containerID="dc890f9d3b7361c251538cd44069211d625c74400eba5d2d69beb6cdfa448e95" exitCode=0 Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.938550 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.938573 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" event={"ID":"621f4cfa-d068-40f9-8a4e-573c5271f499","Type":"ContainerDied","Data":"dc890f9d3b7361c251538cd44069211d625c74400eba5d2d69beb6cdfa448e95"} Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.940301 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-f8dq6" event={"ID":"621f4cfa-d068-40f9-8a4e-573c5271f499","Type":"ContainerDied","Data":"cbd390c6f4dd3dc28dcc4262934a526ba5f1b8976758b8ee40a2b605f52becbe"} Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.940396 4705 scope.go:117] "RemoveContainer" containerID="dc890f9d3b7361c251538cd44069211d625c74400eba5d2d69beb6cdfa448e95" Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.953012 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "621f4cfa-d068-40f9-8a4e-573c5271f499" (UID: "621f4cfa-d068-40f9-8a4e-573c5271f499"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.957338 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.957371 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.957382 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.957394 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94nw5\" (UniqueName: \"kubernetes.io/projected/621f4cfa-d068-40f9-8a4e-573c5271f499-kube-api-access-94nw5\") on node \"crc\" DevicePath \"\"" Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.957404 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.957898 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "621f4cfa-d068-40f9-8a4e-573c5271f499" (UID: "621f4cfa-d068-40f9-8a4e-573c5271f499"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:06:05 crc kubenswrapper[4705]: I0124 08:06:05.983614 4705 scope.go:117] "RemoveContainer" containerID="0ef5617b6bc7a20e7c1600599eaa235c25692d197cb1ab68964b1accde93a633" Jan 24 08:06:06 crc kubenswrapper[4705]: I0124 08:06:06.003737 4705 scope.go:117] "RemoveContainer" containerID="dc890f9d3b7361c251538cd44069211d625c74400eba5d2d69beb6cdfa448e95" Jan 24 08:06:06 crc kubenswrapper[4705]: E0124 08:06:06.004213 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc890f9d3b7361c251538cd44069211d625c74400eba5d2d69beb6cdfa448e95\": container with ID starting with dc890f9d3b7361c251538cd44069211d625c74400eba5d2d69beb6cdfa448e95 not found: ID does not exist" containerID="dc890f9d3b7361c251538cd44069211d625c74400eba5d2d69beb6cdfa448e95" Jan 24 08:06:06 crc kubenswrapper[4705]: I0124 08:06:06.004254 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc890f9d3b7361c251538cd44069211d625c74400eba5d2d69beb6cdfa448e95"} err="failed to get container status \"dc890f9d3b7361c251538cd44069211d625c74400eba5d2d69beb6cdfa448e95\": rpc error: code = NotFound desc = could not find container \"dc890f9d3b7361c251538cd44069211d625c74400eba5d2d69beb6cdfa448e95\": container with ID starting with dc890f9d3b7361c251538cd44069211d625c74400eba5d2d69beb6cdfa448e95 not found: ID does not exist" Jan 24 08:06:06 crc kubenswrapper[4705]: I0124 08:06:06.004279 4705 scope.go:117] "RemoveContainer" containerID="0ef5617b6bc7a20e7c1600599eaa235c25692d197cb1ab68964b1accde93a633" Jan 24 08:06:06 crc kubenswrapper[4705]: E0124 08:06:06.004548 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ef5617b6bc7a20e7c1600599eaa235c25692d197cb1ab68964b1accde93a633\": container with ID starting with 
0ef5617b6bc7a20e7c1600599eaa235c25692d197cb1ab68964b1accde93a633 not found: ID does not exist" containerID="0ef5617b6bc7a20e7c1600599eaa235c25692d197cb1ab68964b1accde93a633" Jan 24 08:06:06 crc kubenswrapper[4705]: I0124 08:06:06.004599 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ef5617b6bc7a20e7c1600599eaa235c25692d197cb1ab68964b1accde93a633"} err="failed to get container status \"0ef5617b6bc7a20e7c1600599eaa235c25692d197cb1ab68964b1accde93a633\": rpc error: code = NotFound desc = could not find container \"0ef5617b6bc7a20e7c1600599eaa235c25692d197cb1ab68964b1accde93a633\": container with ID starting with 0ef5617b6bc7a20e7c1600599eaa235c25692d197cb1ab68964b1accde93a633 not found: ID does not exist" Jan 24 08:06:06 crc kubenswrapper[4705]: I0124 08:06:06.059039 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/621f4cfa-d068-40f9-8a4e-573c5271f499-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 08:06:06 crc kubenswrapper[4705]: I0124 08:06:06.277594 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-f8dq6"] Jan 24 08:06:06 crc kubenswrapper[4705]: I0124 08:06:06.286008 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-f8dq6"] Jan 24 08:06:07 crc kubenswrapper[4705]: I0124 08:06:07.585692 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="621f4cfa-d068-40f9-8a4e-573c5271f499" path="/var/lib/kubelet/pods/621f4cfa-d068-40f9-8a4e-573c5271f499/volumes" Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.303293 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"] Jan 24 08:06:13 crc kubenswrapper[4705]: E0124 08:06:13.304461 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621f4cfa-d068-40f9-8a4e-573c5271f499" containerName="dnsmasq-dns" Jan 24 08:06:13 
crc kubenswrapper[4705]: I0124 08:06:13.304480 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="621f4cfa-d068-40f9-8a4e-573c5271f499" containerName="dnsmasq-dns"
Jan 24 08:06:13 crc kubenswrapper[4705]: E0124 08:06:13.304500 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af512e6-0c42-4c2f-9b32-e648ab04b008" containerName="init"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.304507 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af512e6-0c42-4c2f-9b32-e648ab04b008" containerName="init"
Jan 24 08:06:13 crc kubenswrapper[4705]: E0124 08:06:13.304543 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621f4cfa-d068-40f9-8a4e-573c5271f499" containerName="init"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.304553 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="621f4cfa-d068-40f9-8a4e-573c5271f499" containerName="init"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.304748 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="621f4cfa-d068-40f9-8a4e-573c5271f499" containerName="dnsmasq-dns"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.304773 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af512e6-0c42-4c2f-9b32-e648ab04b008" containerName="init"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.305578 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.307642 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.307847 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.310495 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.311125 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.319139 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"]
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.461338 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxg69\" (UniqueName: \"kubernetes.io/projected/2b3c2835-0838-4592-ae5c-9d442ad0e351-kube-api-access-wxg69\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr\" (UID: \"2b3c2835-0838-4592-ae5c-9d442ad0e351\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.461530 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr\" (UID: \"2b3c2835-0838-4592-ae5c-9d442ad0e351\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.461582 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr\" (UID: \"2b3c2835-0838-4592-ae5c-9d442ad0e351\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.461690 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr\" (UID: \"2b3c2835-0838-4592-ae5c-9d442ad0e351\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.563289 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr\" (UID: \"2b3c2835-0838-4592-ae5c-9d442ad0e351\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.563439 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxg69\" (UniqueName: \"kubernetes.io/projected/2b3c2835-0838-4592-ae5c-9d442ad0e351-kube-api-access-wxg69\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr\" (UID: \"2b3c2835-0838-4592-ae5c-9d442ad0e351\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.563525 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr\" (UID: \"2b3c2835-0838-4592-ae5c-9d442ad0e351\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.563558 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr\" (UID: \"2b3c2835-0838-4592-ae5c-9d442ad0e351\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.569433 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr\" (UID: \"2b3c2835-0838-4592-ae5c-9d442ad0e351\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.571540 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr\" (UID: \"2b3c2835-0838-4592-ae5c-9d442ad0e351\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.576697 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr\" (UID: \"2b3c2835-0838-4592-ae5c-9d442ad0e351\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.581406 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxg69\" (UniqueName: \"kubernetes.io/projected/2b3c2835-0838-4592-ae5c-9d442ad0e351-kube-api-access-wxg69\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr\" (UID: \"2b3c2835-0838-4592-ae5c-9d442ad0e351\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"
Jan 24 08:06:13 crc kubenswrapper[4705]: I0124 08:06:13.635458 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"
Jan 24 08:06:14 crc kubenswrapper[4705]: W0124 08:06:14.145579 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b3c2835_0838_4592_ae5c_9d442ad0e351.slice/crio-49e6dd1b256ef7533dfd0f5e303c7c3a11103ca136e44117e37b395f95a2e8e0 WatchSource:0}: Error finding container 49e6dd1b256ef7533dfd0f5e303c7c3a11103ca136e44117e37b395f95a2e8e0: Status 404 returned error can't find the container with id 49e6dd1b256ef7533dfd0f5e303c7c3a11103ca136e44117e37b395f95a2e8e0
Jan 24 08:06:14 crc kubenswrapper[4705]: I0124 08:06:14.149435 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"]
Jan 24 08:06:15 crc kubenswrapper[4705]: I0124 08:06:15.027710 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr" event={"ID":"2b3c2835-0838-4592-ae5c-9d442ad0e351","Type":"ContainerStarted","Data":"49e6dd1b256ef7533dfd0f5e303c7c3a11103ca136e44117e37b395f95a2e8e0"}
Jan 24 08:06:23 crc kubenswrapper[4705]: I0124 08:06:23.104131 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr" event={"ID":"2b3c2835-0838-4592-ae5c-9d442ad0e351","Type":"ContainerStarted","Data":"dc4915cb4af6af5313332a62bfc8cccc64ba006673cbc0a640c9497e12e7fcee"}
Jan 24 08:06:23 crc kubenswrapper[4705]: I0124 08:06:23.127300 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr" podStartSLOduration=1.647793031 podStartE2EDuration="10.127280562s" podCreationTimestamp="2026-01-24 08:06:13 +0000 UTC" firstStartedPulling="2026-01-24 08:06:14.150865752 +0000 UTC m=+1512.870739040" lastFinishedPulling="2026-01-24 08:06:22.630353283 +0000 UTC m=+1521.350226571" observedRunningTime="2026-01-24 08:06:23.118648726 +0000 UTC m=+1521.838522014" watchObservedRunningTime="2026-01-24 08:06:23.127280562 +0000 UTC m=+1521.847153850"
Jan 24 08:06:29 crc kubenswrapper[4705]: I0124 08:06:29.167941 4705 generic.go:334] "Generic (PLEG): container finished" podID="42a4eca6-7e02-48d4-a187-ea503285c378" containerID="892b2fdba7543b7357a561fe5e1e8d80139978ad85eb99f122a4c0b5a76f9cfb" exitCode=0
Jan 24 08:06:29 crc kubenswrapper[4705]: I0124 08:06:29.168045 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"42a4eca6-7e02-48d4-a187-ea503285c378","Type":"ContainerDied","Data":"892b2fdba7543b7357a561fe5e1e8d80139978ad85eb99f122a4c0b5a76f9cfb"}
Jan 24 08:06:30 crc kubenswrapper[4705]: I0124 08:06:30.179297 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"42a4eca6-7e02-48d4-a187-ea503285c378","Type":"ContainerStarted","Data":"cd7e43be557aee1873c87486dfc30bf75bf6e60eadedf9575610bc83af73da58"}
Jan 24 08:06:30 crc kubenswrapper[4705]: I0124 08:06:30.179785 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 24 08:06:30 crc kubenswrapper[4705]: I0124 08:06:30.201544 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.201526757 podStartE2EDuration="37.201526757s" podCreationTimestamp="2026-01-24 08:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:06:30.198596624 +0000 UTC m=+1528.918469912" watchObservedRunningTime="2026-01-24 08:06:30.201526757 +0000 UTC m=+1528.921400045"
Jan 24 08:06:35 crc kubenswrapper[4705]: I0124 08:06:35.224784 4705 generic.go:334] "Generic (PLEG): container finished" podID="203f66be-7cf6-4664-a0a8-9ed975352414" containerID="7f1f1f015f6f44bbf1c432f307a04b1c27b225866fc4a93a45598a4df9d86b71" exitCode=0
Jan 24 08:06:35 crc kubenswrapper[4705]: I0124 08:06:35.224883 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"203f66be-7cf6-4664-a0a8-9ed975352414","Type":"ContainerDied","Data":"7f1f1f015f6f44bbf1c432f307a04b1c27b225866fc4a93a45598a4df9d86b71"}
Jan 24 08:06:35 crc kubenswrapper[4705]: I0124 08:06:35.227530 4705 generic.go:334] "Generic (PLEG): container finished" podID="2b3c2835-0838-4592-ae5c-9d442ad0e351" containerID="dc4915cb4af6af5313332a62bfc8cccc64ba006673cbc0a640c9497e12e7fcee" exitCode=0
Jan 24 08:06:35 crc kubenswrapper[4705]: I0124 08:06:35.227561 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr" event={"ID":"2b3c2835-0838-4592-ae5c-9d442ad0e351","Type":"ContainerDied","Data":"dc4915cb4af6af5313332a62bfc8cccc64ba006673cbc0a640c9497e12e7fcee"}
Jan 24 08:06:36 crc kubenswrapper[4705]: I0124 08:06:36.253527 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"203f66be-7cf6-4664-a0a8-9ed975352414","Type":"ContainerStarted","Data":"c87ba32c7d5779cea6919172a3f694a8e41d7223ee9658b6b6f53ed3128d0f28"}
Jan 24 08:06:36 crc kubenswrapper[4705]: I0124 08:06:36.254519 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 24 08:06:36 crc kubenswrapper[4705]: I0124 08:06:36.284410 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=45.284390277 podStartE2EDuration="45.284390277s" podCreationTimestamp="2026-01-24 08:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:06:36.275965536 +0000 UTC m=+1534.995838834" watchObservedRunningTime="2026-01-24 08:06:36.284390277 +0000 UTC m=+1535.004263565"
Jan 24 08:06:36 crc kubenswrapper[4705]: I0124 08:06:36.738260 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"
Jan 24 08:06:36 crc kubenswrapper[4705]: I0124 08:06:36.836074 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-inventory\") pod \"2b3c2835-0838-4592-ae5c-9d442ad0e351\" (UID: \"2b3c2835-0838-4592-ae5c-9d442ad0e351\") "
Jan 24 08:06:36 crc kubenswrapper[4705]: I0124 08:06:36.836121 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxg69\" (UniqueName: \"kubernetes.io/projected/2b3c2835-0838-4592-ae5c-9d442ad0e351-kube-api-access-wxg69\") pod \"2b3c2835-0838-4592-ae5c-9d442ad0e351\" (UID: \"2b3c2835-0838-4592-ae5c-9d442ad0e351\") "
Jan 24 08:06:36 crc kubenswrapper[4705]: I0124 08:06:36.836194 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-ssh-key-openstack-edpm-ipam\") pod \"2b3c2835-0838-4592-ae5c-9d442ad0e351\" (UID: \"2b3c2835-0838-4592-ae5c-9d442ad0e351\") "
Jan 24 08:06:36 crc kubenswrapper[4705]: I0124 08:06:36.836233 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-repo-setup-combined-ca-bundle\") pod \"2b3c2835-0838-4592-ae5c-9d442ad0e351\" (UID: \"2b3c2835-0838-4592-ae5c-9d442ad0e351\") "
Jan 24 08:06:36 crc kubenswrapper[4705]: I0124 08:06:36.846222 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3c2835-0838-4592-ae5c-9d442ad0e351-kube-api-access-wxg69" (OuterVolumeSpecName: "kube-api-access-wxg69") pod "2b3c2835-0838-4592-ae5c-9d442ad0e351" (UID: "2b3c2835-0838-4592-ae5c-9d442ad0e351"). InnerVolumeSpecName "kube-api-access-wxg69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 08:06:36 crc kubenswrapper[4705]: I0124 08:06:36.846342 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2b3c2835-0838-4592-ae5c-9d442ad0e351" (UID: "2b3c2835-0838-4592-ae5c-9d442ad0e351"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:06:36 crc kubenswrapper[4705]: I0124 08:06:36.867106 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2b3c2835-0838-4592-ae5c-9d442ad0e351" (UID: "2b3c2835-0838-4592-ae5c-9d442ad0e351"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:06:36 crc kubenswrapper[4705]: I0124 08:06:36.869157 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-inventory" (OuterVolumeSpecName: "inventory") pod "2b3c2835-0838-4592-ae5c-9d442ad0e351" (UID: "2b3c2835-0838-4592-ae5c-9d442ad0e351"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:06:36 crc kubenswrapper[4705]: I0124 08:06:36.938317 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-inventory\") on node \"crc\" DevicePath \"\""
Jan 24 08:06:36 crc kubenswrapper[4705]: I0124 08:06:36.938349 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxg69\" (UniqueName: \"kubernetes.io/projected/2b3c2835-0838-4592-ae5c-9d442ad0e351-kube-api-access-wxg69\") on node \"crc\" DevicePath \"\""
Jan 24 08:06:36 crc kubenswrapper[4705]: I0124 08:06:36.938360 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 24 08:06:36 crc kubenswrapper[4705]: I0124 08:06:36.938373 4705 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3c2835-0838-4592-ae5c-9d442ad0e351-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.263341 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr" event={"ID":"2b3c2835-0838-4592-ae5c-9d442ad0e351","Type":"ContainerDied","Data":"49e6dd1b256ef7533dfd0f5e303c7c3a11103ca136e44117e37b395f95a2e8e0"}
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.263671 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49e6dd1b256ef7533dfd0f5e303c7c3a11103ca136e44117e37b395f95a2e8e0"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.263361 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.380298 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j"]
Jan 24 08:06:37 crc kubenswrapper[4705]: E0124 08:06:37.380739 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3c2835-0838-4592-ae5c-9d442ad0e351" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.380767 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3c2835-0838-4592-ae5c-9d442ad0e351" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.380991 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3c2835-0838-4592-ae5c-9d442ad0e351" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.381638 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.385308 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.385413 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.385525 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.385833 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.400596 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j"]
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.548970 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5jmb\" (UniqueName: \"kubernetes.io/projected/3579a044-5429-43aa-be25-6720cbb84d82-kube-api-access-w5jmb\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xfn7j\" (UID: \"3579a044-5429-43aa-be25-6720cbb84d82\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.549055 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3579a044-5429-43aa-be25-6720cbb84d82-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xfn7j\" (UID: \"3579a044-5429-43aa-be25-6720cbb84d82\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.549136 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3579a044-5429-43aa-be25-6720cbb84d82-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xfn7j\" (UID: \"3579a044-5429-43aa-be25-6720cbb84d82\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.653734 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5jmb\" (UniqueName: \"kubernetes.io/projected/3579a044-5429-43aa-be25-6720cbb84d82-kube-api-access-w5jmb\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xfn7j\" (UID: \"3579a044-5429-43aa-be25-6720cbb84d82\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.653966 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3579a044-5429-43aa-be25-6720cbb84d82-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xfn7j\" (UID: \"3579a044-5429-43aa-be25-6720cbb84d82\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.654276 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3579a044-5429-43aa-be25-6720cbb84d82-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xfn7j\" (UID: \"3579a044-5429-43aa-be25-6720cbb84d82\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.659072 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3579a044-5429-43aa-be25-6720cbb84d82-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xfn7j\" (UID: \"3579a044-5429-43aa-be25-6720cbb84d82\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.662512 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3579a044-5429-43aa-be25-6720cbb84d82-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xfn7j\" (UID: \"3579a044-5429-43aa-be25-6720cbb84d82\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.676242 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5jmb\" (UniqueName: \"kubernetes.io/projected/3579a044-5429-43aa-be25-6720cbb84d82-kube-api-access-w5jmb\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xfn7j\" (UID: \"3579a044-5429-43aa-be25-6720cbb84d82\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j"
Jan 24 08:06:37 crc kubenswrapper[4705]: I0124 08:06:37.697478 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j"
Jan 24 08:06:38 crc kubenswrapper[4705]: I0124 08:06:38.238787 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j"]
Jan 24 08:06:38 crc kubenswrapper[4705]: I0124 08:06:38.282301 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j" event={"ID":"3579a044-5429-43aa-be25-6720cbb84d82","Type":"ContainerStarted","Data":"baeec343c84c9eab210d9e1c72de01e0831b9fe0e2a1f64c0d525ba1b2d04bd5"}
Jan 24 08:06:39 crc kubenswrapper[4705]: I0124 08:06:39.292163 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j" event={"ID":"3579a044-5429-43aa-be25-6720cbb84d82","Type":"ContainerStarted","Data":"bae33a0d77e1575a7a4bf04485b118f4bf55443ceac5700e3c19110a54b02fb4"}
Jan 24 08:06:39 crc kubenswrapper[4705]: I0124 08:06:39.316608 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j" podStartSLOduration=1.8756578689999999 podStartE2EDuration="2.316585286s" podCreationTimestamp="2026-01-24 08:06:37 +0000 UTC" firstStartedPulling="2026-01-24 08:06:38.240304555 +0000 UTC m=+1536.960177843" lastFinishedPulling="2026-01-24 08:06:38.681231972 +0000 UTC m=+1537.401105260" observedRunningTime="2026-01-24 08:06:39.307508779 +0000 UTC m=+1538.027382077" watchObservedRunningTime="2026-01-24 08:06:39.316585286 +0000 UTC m=+1538.036458574"
Jan 24 08:06:43 crc kubenswrapper[4705]: I0124 08:06:43.330703 4705 generic.go:334] "Generic (PLEG): container finished" podID="3579a044-5429-43aa-be25-6720cbb84d82" containerID="bae33a0d77e1575a7a4bf04485b118f4bf55443ceac5700e3c19110a54b02fb4" exitCode=0
Jan 24 08:06:43 crc kubenswrapper[4705]: I0124 08:06:43.330898 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j" event={"ID":"3579a044-5429-43aa-be25-6720cbb84d82","Type":"ContainerDied","Data":"bae33a0d77e1575a7a4bf04485b118f4bf55443ceac5700e3c19110a54b02fb4"}
Jan 24 08:06:44 crc kubenswrapper[4705]: I0124 08:06:44.154070 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.016258 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.171690 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5jmb\" (UniqueName: \"kubernetes.io/projected/3579a044-5429-43aa-be25-6720cbb84d82-kube-api-access-w5jmb\") pod \"3579a044-5429-43aa-be25-6720cbb84d82\" (UID: \"3579a044-5429-43aa-be25-6720cbb84d82\") "
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.171870 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3579a044-5429-43aa-be25-6720cbb84d82-ssh-key-openstack-edpm-ipam\") pod \"3579a044-5429-43aa-be25-6720cbb84d82\" (UID: \"3579a044-5429-43aa-be25-6720cbb84d82\") "
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.171984 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3579a044-5429-43aa-be25-6720cbb84d82-inventory\") pod \"3579a044-5429-43aa-be25-6720cbb84d82\" (UID: \"3579a044-5429-43aa-be25-6720cbb84d82\") "
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.179193 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3579a044-5429-43aa-be25-6720cbb84d82-kube-api-access-w5jmb" (OuterVolumeSpecName: "kube-api-access-w5jmb") pod "3579a044-5429-43aa-be25-6720cbb84d82" (UID: "3579a044-5429-43aa-be25-6720cbb84d82"). InnerVolumeSpecName "kube-api-access-w5jmb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.206759 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3579a044-5429-43aa-be25-6720cbb84d82-inventory" (OuterVolumeSpecName: "inventory") pod "3579a044-5429-43aa-be25-6720cbb84d82" (UID: "3579a044-5429-43aa-be25-6720cbb84d82"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.221627 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3579a044-5429-43aa-be25-6720cbb84d82-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3579a044-5429-43aa-be25-6720cbb84d82" (UID: "3579a044-5429-43aa-be25-6720cbb84d82"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.275447 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5jmb\" (UniqueName: \"kubernetes.io/projected/3579a044-5429-43aa-be25-6720cbb84d82-kube-api-access-w5jmb\") on node \"crc\" DevicePath \"\""
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.275651 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3579a044-5429-43aa-be25-6720cbb84d82-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.275754 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3579a044-5429-43aa-be25-6720cbb84d82-inventory\") on node \"crc\" DevicePath \"\""
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.350797 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j" event={"ID":"3579a044-5429-43aa-be25-6720cbb84d82","Type":"ContainerDied","Data":"baeec343c84c9eab210d9e1c72de01e0831b9fe0e2a1f64c0d525ba1b2d04bd5"}
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.351112 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baeec343c84c9eab210d9e1c72de01e0831b9fe0e2a1f64c0d525ba1b2d04bd5"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.350888 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xfn7j"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.433304 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"]
Jan 24 08:06:45 crc kubenswrapper[4705]: E0124 08:06:45.433696 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3579a044-5429-43aa-be25-6720cbb84d82" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.433709 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3579a044-5429-43aa-be25-6720cbb84d82" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.433953 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3579a044-5429-43aa-be25-6720cbb84d82" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.434593 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.449751 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.449876 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.449939 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.449978 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.451781 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"]
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.583440 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t\" (UID: \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.583543 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xvcl\" (UniqueName: \"kubernetes.io/projected/67b8ef17-3a9a-4ebc-af02-eb475e2304af-kube-api-access-6xvcl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t\" (UID: \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.583646 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t\" (UID: \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.583728 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t\" (UID: \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.685328 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t\" (UID: \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.685415 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t\" (UID: \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.685479 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t\" (UID: \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.685526 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xvcl\" (UniqueName: \"kubernetes.io/projected/67b8ef17-3a9a-4ebc-af02-eb475e2304af-kube-api-access-6xvcl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t\" (UID: \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.690890 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t\" (UID: \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.691156 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t\" (UID: \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.691376 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t\" (UID: \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.704309 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xvcl\" (UniqueName: \"kubernetes.io/projected/67b8ef17-3a9a-4ebc-af02-eb475e2304af-kube-api-access-6xvcl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t\" (UID: \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"
Jan 24 08:06:45 crc kubenswrapper[4705]: I0124 08:06:45.762662 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"
Jan 24 08:06:46 crc kubenswrapper[4705]: I0124 08:06:46.261208 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t"]
Jan 24 08:06:46 crc kubenswrapper[4705]: I0124 08:06:46.360673 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t" event={"ID":"67b8ef17-3a9a-4ebc-af02-eb475e2304af","Type":"ContainerStarted","Data":"5491b4536b285b31ad0f9e7202d441f44494c76d3d5cce84923d0debcf6ef4d5"}
Jan 24 08:06:47 crc kubenswrapper[4705]: I0124 08:06:47.371696 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t" event={"ID":"67b8ef17-3a9a-4ebc-af02-eb475e2304af","Type":"ContainerStarted","Data":"c9ebbbc67c3d121634e3ed819da4587e120ce950d758f0528096057a6fef1207"}
Jan 24 08:06:48 crc kubenswrapper[4705]: I0124 08:06:48.403736 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t" podStartSLOduration=2.964535293 podStartE2EDuration="3.403717207s" podCreationTimestamp="2026-01-24 08:06:45 +0000 UTC" firstStartedPulling="2026-01-24 08:06:46.266800617 +0000 UTC m=+1544.986673905"
lastFinishedPulling="2026-01-24 08:06:46.705982531 +0000 UTC m=+1545.425855819" observedRunningTime="2026-01-24 08:06:48.39905614 +0000 UTC m=+1547.118929448" watchObservedRunningTime="2026-01-24 08:06:48.403717207 +0000 UTC m=+1547.123590495" Jan 24 08:06:52 crc kubenswrapper[4705]: I0124 08:06:52.417759 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 24 08:07:04 crc kubenswrapper[4705]: I0124 08:07:04.239960 4705 scope.go:117] "RemoveContainer" containerID="d97aa0b07a91a2345b7a139371b45c68f6552fc608b3b72c6226883c22cbb979" Jan 24 08:07:04 crc kubenswrapper[4705]: I0124 08:07:04.272928 4705 scope.go:117] "RemoveContainer" containerID="fb02679e21e1809eb5bdf67d85555d45c326c2eb881bba8e40b7ecc51023af4d" Jan 24 08:07:04 crc kubenswrapper[4705]: I0124 08:07:04.324091 4705 scope.go:117] "RemoveContainer" containerID="9e5fc067f6521054c987e891000d286cfdab47d360347586c669d52bf398e08e" Jan 24 08:07:04 crc kubenswrapper[4705]: I0124 08:07:04.396327 4705 scope.go:117] "RemoveContainer" containerID="c2d84db0817656c74058aacf22582805fb1818b7a4e2557e3795cf218c6f760c" Jan 24 08:07:07 crc kubenswrapper[4705]: I0124 08:07:07.071802 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:07:07 crc kubenswrapper[4705]: I0124 08:07:07.072179 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:07:37 crc kubenswrapper[4705]: I0124 08:07:37.071939 4705 patch_prober.go:28] interesting 
pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:07:37 crc kubenswrapper[4705]: I0124 08:07:37.072555 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:08:04 crc kubenswrapper[4705]: I0124 08:08:04.532247 4705 scope.go:117] "RemoveContainer" containerID="a0a4d36097fa0829035badbb6b222043f52ffb9ea2ba7bc665f21f77dfe06563" Jan 24 08:08:04 crc kubenswrapper[4705]: I0124 08:08:04.560613 4705 scope.go:117] "RemoveContainer" containerID="b674941e119859e5e54ce45d0088310944f038d162d1e1382fc6e86307f7087b" Jan 24 08:08:07 crc kubenswrapper[4705]: I0124 08:08:07.071593 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:08:07 crc kubenswrapper[4705]: I0124 08:08:07.072007 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:08:07 crc kubenswrapper[4705]: I0124 08:08:07.072071 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 08:08:07 crc 
kubenswrapper[4705]: I0124 08:08:07.072995 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 08:08:07 crc kubenswrapper[4705]: I0124 08:08:07.073064 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" gracePeriod=600 Jan 24 08:08:07 crc kubenswrapper[4705]: E0124 08:08:07.822181 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:08:08 crc kubenswrapper[4705]: I0124 08:08:08.241843 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" exitCode=0 Jan 24 08:08:08 crc kubenswrapper[4705]: I0124 08:08:08.241904 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7"} Jan 24 08:08:08 crc kubenswrapper[4705]: I0124 08:08:08.241956 4705 scope.go:117] "RemoveContainer" 
containerID="775d2ed06dfb0c63c8b346ce0d06f95edde444c60e383f78b6d4ebabd731e08f" Jan 24 08:08:08 crc kubenswrapper[4705]: I0124 08:08:08.242717 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:08:08 crc kubenswrapper[4705]: E0124 08:08:08.243070 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:08:13 crc kubenswrapper[4705]: I0124 08:08:13.309296 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ff8r8"] Jan 24 08:08:13 crc kubenswrapper[4705]: I0124 08:08:13.312612 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:13 crc kubenswrapper[4705]: I0124 08:08:13.324978 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ff8r8"] Jan 24 08:08:13 crc kubenswrapper[4705]: I0124 08:08:13.413735 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-catalog-content\") pod \"community-operators-ff8r8\" (UID: \"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330\") " pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:13 crc kubenswrapper[4705]: I0124 08:08:13.414088 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-utilities\") pod \"community-operators-ff8r8\" (UID: \"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330\") " pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:13 crc kubenswrapper[4705]: I0124 08:08:13.414199 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brqkw\" (UniqueName: \"kubernetes.io/projected/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-kube-api-access-brqkw\") pod \"community-operators-ff8r8\" (UID: \"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330\") " pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:13 crc kubenswrapper[4705]: I0124 08:08:13.517336 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-catalog-content\") pod \"community-operators-ff8r8\" (UID: \"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330\") " pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:13 crc kubenswrapper[4705]: I0124 08:08:13.517463 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-utilities\") pod \"community-operators-ff8r8\" (UID: \"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330\") " pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:13 crc kubenswrapper[4705]: I0124 08:08:13.517528 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brqkw\" (UniqueName: \"kubernetes.io/projected/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-kube-api-access-brqkw\") pod \"community-operators-ff8r8\" (UID: \"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330\") " pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:13 crc kubenswrapper[4705]: I0124 08:08:13.518587 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-catalog-content\") pod \"community-operators-ff8r8\" (UID: \"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330\") " pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:13 crc kubenswrapper[4705]: I0124 08:08:13.518740 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-utilities\") pod \"community-operators-ff8r8\" (UID: \"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330\") " pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:13 crc kubenswrapper[4705]: I0124 08:08:13.547057 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brqkw\" (UniqueName: \"kubernetes.io/projected/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-kube-api-access-brqkw\") pod \"community-operators-ff8r8\" (UID: \"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330\") " pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:13 crc kubenswrapper[4705]: I0124 08:08:13.631422 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:14 crc kubenswrapper[4705]: I0124 08:08:14.200211 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ff8r8"] Jan 24 08:08:14 crc kubenswrapper[4705]: I0124 08:08:14.309767 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff8r8" event={"ID":"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330","Type":"ContainerStarted","Data":"d06634bf386f423cc32172d847711b6b54e995ec36275cf73cbf44d0b9b38e86"} Jan 24 08:08:15 crc kubenswrapper[4705]: I0124 08:08:15.320948 4705 generic.go:334] "Generic (PLEG): container finished" podID="d3b21ef6-a5fa-43a8-86ed-ea0dc7312330" containerID="81155fabc430137318ada488df47d69a94a321882bb4a5e90f6ec686610b06f5" exitCode=0 Jan 24 08:08:15 crc kubenswrapper[4705]: I0124 08:08:15.320990 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff8r8" event={"ID":"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330","Type":"ContainerDied","Data":"81155fabc430137318ada488df47d69a94a321882bb4a5e90f6ec686610b06f5"} Jan 24 08:08:16 crc kubenswrapper[4705]: I0124 08:08:16.331429 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff8r8" event={"ID":"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330","Type":"ContainerStarted","Data":"81dd15c7f28585ca28df85a79c73af972023be3893385b477cdd1ddd5a028bee"} Jan 24 08:08:17 crc kubenswrapper[4705]: I0124 08:08:17.341636 4705 generic.go:334] "Generic (PLEG): container finished" podID="d3b21ef6-a5fa-43a8-86ed-ea0dc7312330" containerID="81dd15c7f28585ca28df85a79c73af972023be3893385b477cdd1ddd5a028bee" exitCode=0 Jan 24 08:08:17 crc kubenswrapper[4705]: I0124 08:08:17.341714 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff8r8" 
event={"ID":"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330","Type":"ContainerDied","Data":"81dd15c7f28585ca28df85a79c73af972023be3893385b477cdd1ddd5a028bee"} Jan 24 08:08:18 crc kubenswrapper[4705]: I0124 08:08:18.353646 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff8r8" event={"ID":"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330","Type":"ContainerStarted","Data":"7470bc85a49182e0f1405e7358594a4418943b4c5127759d817cbcb5aea59fa1"} Jan 24 08:08:18 crc kubenswrapper[4705]: I0124 08:08:18.380578 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ff8r8" podStartSLOduration=2.882301832 podStartE2EDuration="5.380555409s" podCreationTimestamp="2026-01-24 08:08:13 +0000 UTC" firstStartedPulling="2026-01-24 08:08:15.324094486 +0000 UTC m=+1634.043967774" lastFinishedPulling="2026-01-24 08:08:17.822348063 +0000 UTC m=+1636.542221351" observedRunningTime="2026-01-24 08:08:18.370642434 +0000 UTC m=+1637.090515722" watchObservedRunningTime="2026-01-24 08:08:18.380555409 +0000 UTC m=+1637.100428697" Jan 24 08:08:20 crc kubenswrapper[4705]: I0124 08:08:20.684018 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d2mws"] Jan 24 08:08:20 crc kubenswrapper[4705]: I0124 08:08:20.686523 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:20 crc kubenswrapper[4705]: I0124 08:08:20.695646 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2mws"] Jan 24 08:08:20 crc kubenswrapper[4705]: I0124 08:08:20.855947 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-catalog-content\") pod \"certified-operators-d2mws\" (UID: \"47578e17-3544-4de9-8fe8-8fbd6b28a4b3\") " pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:20 crc kubenswrapper[4705]: I0124 08:08:20.856016 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-utilities\") pod \"certified-operators-d2mws\" (UID: \"47578e17-3544-4de9-8fe8-8fbd6b28a4b3\") " pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:20 crc kubenswrapper[4705]: I0124 08:08:20.856088 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lz4c\" (UniqueName: \"kubernetes.io/projected/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-kube-api-access-6lz4c\") pod \"certified-operators-d2mws\" (UID: \"47578e17-3544-4de9-8fe8-8fbd6b28a4b3\") " pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:20 crc kubenswrapper[4705]: I0124 08:08:20.958137 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-catalog-content\") pod \"certified-operators-d2mws\" (UID: \"47578e17-3544-4de9-8fe8-8fbd6b28a4b3\") " pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:20 crc kubenswrapper[4705]: I0124 08:08:20.958551 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-catalog-content\") pod \"certified-operators-d2mws\" (UID: \"47578e17-3544-4de9-8fe8-8fbd6b28a4b3\") " pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:20 crc kubenswrapper[4705]: I0124 08:08:20.958691 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-utilities\") pod \"certified-operators-d2mws\" (UID: \"47578e17-3544-4de9-8fe8-8fbd6b28a4b3\") " pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:20 crc kubenswrapper[4705]: I0124 08:08:20.958969 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-utilities\") pod \"certified-operators-d2mws\" (UID: \"47578e17-3544-4de9-8fe8-8fbd6b28a4b3\") " pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:20 crc kubenswrapper[4705]: I0124 08:08:20.959037 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lz4c\" (UniqueName: \"kubernetes.io/projected/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-kube-api-access-6lz4c\") pod \"certified-operators-d2mws\" (UID: \"47578e17-3544-4de9-8fe8-8fbd6b28a4b3\") " pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:21 crc kubenswrapper[4705]: I0124 08:08:21.013019 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lz4c\" (UniqueName: \"kubernetes.io/projected/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-kube-api-access-6lz4c\") pod \"certified-operators-d2mws\" (UID: \"47578e17-3544-4de9-8fe8-8fbd6b28a4b3\") " pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:21 crc kubenswrapper[4705]: I0124 08:08:21.305707 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:21 crc kubenswrapper[4705]: I0124 08:08:21.846551 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2mws"] Jan 24 08:08:22 crc kubenswrapper[4705]: I0124 08:08:22.395070 4705 generic.go:334] "Generic (PLEG): container finished" podID="47578e17-3544-4de9-8fe8-8fbd6b28a4b3" containerID="4d6bdc003adb920a80c8ddad6e9ed326d808df149b4c3d9d55f1fbf1259dab00" exitCode=0 Jan 24 08:08:22 crc kubenswrapper[4705]: I0124 08:08:22.395122 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2mws" event={"ID":"47578e17-3544-4de9-8fe8-8fbd6b28a4b3","Type":"ContainerDied","Data":"4d6bdc003adb920a80c8ddad6e9ed326d808df149b4c3d9d55f1fbf1259dab00"} Jan 24 08:08:22 crc kubenswrapper[4705]: I0124 08:08:22.395366 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2mws" event={"ID":"47578e17-3544-4de9-8fe8-8fbd6b28a4b3","Type":"ContainerStarted","Data":"3441b190ab39bc4103b04dfe28bb1830f170ef8ab44fdab1ba7fd2ec65677abe"} Jan 24 08:08:23 crc kubenswrapper[4705]: I0124 08:08:23.414404 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2mws" event={"ID":"47578e17-3544-4de9-8fe8-8fbd6b28a4b3","Type":"ContainerStarted","Data":"059a5fa7fe57cf8cb2bf8d9289cf10d40c8ca143bac9b1aea5726779481cc2df"} Jan 24 08:08:23 crc kubenswrapper[4705]: I0124 08:08:23.576047 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:08:23 crc kubenswrapper[4705]: E0124 08:08:23.576391 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:08:23 crc kubenswrapper[4705]: I0124 08:08:23.631681 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:23 crc kubenswrapper[4705]: I0124 08:08:23.631746 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:23 crc kubenswrapper[4705]: I0124 08:08:23.685006 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:24 crc kubenswrapper[4705]: I0124 08:08:24.429727 4705 generic.go:334] "Generic (PLEG): container finished" podID="47578e17-3544-4de9-8fe8-8fbd6b28a4b3" containerID="059a5fa7fe57cf8cb2bf8d9289cf10d40c8ca143bac9b1aea5726779481cc2df" exitCode=0 Jan 24 08:08:24 crc kubenswrapper[4705]: I0124 08:08:24.429863 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2mws" event={"ID":"47578e17-3544-4de9-8fe8-8fbd6b28a4b3","Type":"ContainerDied","Data":"059a5fa7fe57cf8cb2bf8d9289cf10d40c8ca143bac9b1aea5726779481cc2df"} Jan 24 08:08:24 crc kubenswrapper[4705]: I0124 08:08:24.482930 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:25 crc kubenswrapper[4705]: I0124 08:08:25.442716 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2mws" event={"ID":"47578e17-3544-4de9-8fe8-8fbd6b28a4b3","Type":"ContainerStarted","Data":"07e7290446ef6b7ea3cdacccf2b2fb34f13bf63be8083b1d4ed25ef8d50c00e2"} Jan 24 08:08:25 crc kubenswrapper[4705]: I0124 08:08:25.469651 4705 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/certified-operators-d2mws" podStartSLOduration=2.954244134 podStartE2EDuration="5.469630935s" podCreationTimestamp="2026-01-24 08:08:20 +0000 UTC" firstStartedPulling="2026-01-24 08:08:22.397591141 +0000 UTC m=+1641.117464429" lastFinishedPulling="2026-01-24 08:08:24.912977942 +0000 UTC m=+1643.632851230" observedRunningTime="2026-01-24 08:08:25.464113222 +0000 UTC m=+1644.183986520" watchObservedRunningTime="2026-01-24 08:08:25.469630935 +0000 UTC m=+1644.189504223" Jan 24 08:08:26 crc kubenswrapper[4705]: I0124 08:08:26.077152 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ff8r8"] Jan 24 08:08:26 crc kubenswrapper[4705]: I0124 08:08:26.452299 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ff8r8" podUID="d3b21ef6-a5fa-43a8-86ed-ea0dc7312330" containerName="registry-server" containerID="cri-o://7470bc85a49182e0f1405e7358594a4418943b4c5127759d817cbcb5aea59fa1" gracePeriod=2 Jan 24 08:08:26 crc kubenswrapper[4705]: I0124 08:08:26.908798 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:26 crc kubenswrapper[4705]: I0124 08:08:26.987063 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brqkw\" (UniqueName: \"kubernetes.io/projected/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-kube-api-access-brqkw\") pod \"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330\" (UID: \"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330\") " Jan 24 08:08:26 crc kubenswrapper[4705]: I0124 08:08:26.987159 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-utilities\") pod \"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330\" (UID: \"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330\") " Jan 24 08:08:26 crc kubenswrapper[4705]: I0124 08:08:26.987292 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-catalog-content\") pod \"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330\" (UID: \"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330\") " Jan 24 08:08:26 crc kubenswrapper[4705]: I0124 08:08:26.987886 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-utilities" (OuterVolumeSpecName: "utilities") pod "d3b21ef6-a5fa-43a8-86ed-ea0dc7312330" (UID: "d3b21ef6-a5fa-43a8-86ed-ea0dc7312330"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:08:26 crc kubenswrapper[4705]: I0124 08:08:26.988100 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:08:26 crc kubenswrapper[4705]: I0124 08:08:26.993522 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-kube-api-access-brqkw" (OuterVolumeSpecName: "kube-api-access-brqkw") pod "d3b21ef6-a5fa-43a8-86ed-ea0dc7312330" (UID: "d3b21ef6-a5fa-43a8-86ed-ea0dc7312330"). InnerVolumeSpecName "kube-api-access-brqkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.038743 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3b21ef6-a5fa-43a8-86ed-ea0dc7312330" (UID: "d3b21ef6-a5fa-43a8-86ed-ea0dc7312330"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.090086 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brqkw\" (UniqueName: \"kubernetes.io/projected/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-kube-api-access-brqkw\") on node \"crc\" DevicePath \"\"" Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.090366 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.462229 4705 generic.go:334] "Generic (PLEG): container finished" podID="d3b21ef6-a5fa-43a8-86ed-ea0dc7312330" containerID="7470bc85a49182e0f1405e7358594a4418943b4c5127759d817cbcb5aea59fa1" exitCode=0 Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.462307 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff8r8" event={"ID":"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330","Type":"ContainerDied","Data":"7470bc85a49182e0f1405e7358594a4418943b4c5127759d817cbcb5aea59fa1"} Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.462349 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff8r8" event={"ID":"d3b21ef6-a5fa-43a8-86ed-ea0dc7312330","Type":"ContainerDied","Data":"d06634bf386f423cc32172d847711b6b54e995ec36275cf73cbf44d0b9b38e86"} Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.462377 4705 scope.go:117] "RemoveContainer" containerID="7470bc85a49182e0f1405e7358594a4418943b4c5127759d817cbcb5aea59fa1" Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.462541 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ff8r8" Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.489210 4705 scope.go:117] "RemoveContainer" containerID="81dd15c7f28585ca28df85a79c73af972023be3893385b477cdd1ddd5a028bee" Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.499336 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ff8r8"] Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.507465 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ff8r8"] Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.536653 4705 scope.go:117] "RemoveContainer" containerID="81155fabc430137318ada488df47d69a94a321882bb4a5e90f6ec686610b06f5" Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.561685 4705 scope.go:117] "RemoveContainer" containerID="7470bc85a49182e0f1405e7358594a4418943b4c5127759d817cbcb5aea59fa1" Jan 24 08:08:27 crc kubenswrapper[4705]: E0124 08:08:27.562193 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7470bc85a49182e0f1405e7358594a4418943b4c5127759d817cbcb5aea59fa1\": container with ID starting with 7470bc85a49182e0f1405e7358594a4418943b4c5127759d817cbcb5aea59fa1 not found: ID does not exist" containerID="7470bc85a49182e0f1405e7358594a4418943b4c5127759d817cbcb5aea59fa1" Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.562226 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7470bc85a49182e0f1405e7358594a4418943b4c5127759d817cbcb5aea59fa1"} err="failed to get container status \"7470bc85a49182e0f1405e7358594a4418943b4c5127759d817cbcb5aea59fa1\": rpc error: code = NotFound desc = could not find container \"7470bc85a49182e0f1405e7358594a4418943b4c5127759d817cbcb5aea59fa1\": container with ID starting with 7470bc85a49182e0f1405e7358594a4418943b4c5127759d817cbcb5aea59fa1 not 
found: ID does not exist" Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.562263 4705 scope.go:117] "RemoveContainer" containerID="81dd15c7f28585ca28df85a79c73af972023be3893385b477cdd1ddd5a028bee" Jan 24 08:08:27 crc kubenswrapper[4705]: E0124 08:08:27.562557 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81dd15c7f28585ca28df85a79c73af972023be3893385b477cdd1ddd5a028bee\": container with ID starting with 81dd15c7f28585ca28df85a79c73af972023be3893385b477cdd1ddd5a028bee not found: ID does not exist" containerID="81dd15c7f28585ca28df85a79c73af972023be3893385b477cdd1ddd5a028bee" Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.562579 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81dd15c7f28585ca28df85a79c73af972023be3893385b477cdd1ddd5a028bee"} err="failed to get container status \"81dd15c7f28585ca28df85a79c73af972023be3893385b477cdd1ddd5a028bee\": rpc error: code = NotFound desc = could not find container \"81dd15c7f28585ca28df85a79c73af972023be3893385b477cdd1ddd5a028bee\": container with ID starting with 81dd15c7f28585ca28df85a79c73af972023be3893385b477cdd1ddd5a028bee not found: ID does not exist" Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.562592 4705 scope.go:117] "RemoveContainer" containerID="81155fabc430137318ada488df47d69a94a321882bb4a5e90f6ec686610b06f5" Jan 24 08:08:27 crc kubenswrapper[4705]: E0124 08:08:27.562876 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81155fabc430137318ada488df47d69a94a321882bb4a5e90f6ec686610b06f5\": container with ID starting with 81155fabc430137318ada488df47d69a94a321882bb4a5e90f6ec686610b06f5 not found: ID does not exist" containerID="81155fabc430137318ada488df47d69a94a321882bb4a5e90f6ec686610b06f5" Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.562924 4705 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81155fabc430137318ada488df47d69a94a321882bb4a5e90f6ec686610b06f5"} err="failed to get container status \"81155fabc430137318ada488df47d69a94a321882bb4a5e90f6ec686610b06f5\": rpc error: code = NotFound desc = could not find container \"81155fabc430137318ada488df47d69a94a321882bb4a5e90f6ec686610b06f5\": container with ID starting with 81155fabc430137318ada488df47d69a94a321882bb4a5e90f6ec686610b06f5 not found: ID does not exist" Jan 24 08:08:27 crc kubenswrapper[4705]: I0124 08:08:27.588557 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3b21ef6-a5fa-43a8-86ed-ea0dc7312330" path="/var/lib/kubelet/pods/d3b21ef6-a5fa-43a8-86ed-ea0dc7312330/volumes" Jan 24 08:08:31 crc kubenswrapper[4705]: I0124 08:08:31.306785 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:31 crc kubenswrapper[4705]: I0124 08:08:31.307182 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:31 crc kubenswrapper[4705]: I0124 08:08:31.350441 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:31 crc kubenswrapper[4705]: I0124 08:08:31.554983 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:32 crc kubenswrapper[4705]: I0124 08:08:32.075712 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2mws"] Jan 24 08:08:33 crc kubenswrapper[4705]: I0124 08:08:33.524456 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d2mws" podUID="47578e17-3544-4de9-8fe8-8fbd6b28a4b3" containerName="registry-server" 
containerID="cri-o://07e7290446ef6b7ea3cdacccf2b2fb34f13bf63be8083b1d4ed25ef8d50c00e2" gracePeriod=2 Jan 24 08:08:33 crc kubenswrapper[4705]: E0124 08:08:33.673012 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47578e17_3544_4de9_8fe8_8fbd6b28a4b3.slice/crio-conmon-07e7290446ef6b7ea3cdacccf2b2fb34f13bf63be8083b1d4ed25ef8d50c00e2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47578e17_3544_4de9_8fe8_8fbd6b28a4b3.slice/crio-07e7290446ef6b7ea3cdacccf2b2fb34f13bf63be8083b1d4ed25ef8d50c00e2.scope\": RecentStats: unable to find data in memory cache]" Jan 24 08:08:33 crc kubenswrapper[4705]: I0124 08:08:33.966956 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.127959 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-utilities\") pod \"47578e17-3544-4de9-8fe8-8fbd6b28a4b3\" (UID: \"47578e17-3544-4de9-8fe8-8fbd6b28a4b3\") " Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.128101 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lz4c\" (UniqueName: \"kubernetes.io/projected/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-kube-api-access-6lz4c\") pod \"47578e17-3544-4de9-8fe8-8fbd6b28a4b3\" (UID: \"47578e17-3544-4de9-8fe8-8fbd6b28a4b3\") " Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.128160 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-catalog-content\") pod \"47578e17-3544-4de9-8fe8-8fbd6b28a4b3\" (UID: 
\"47578e17-3544-4de9-8fe8-8fbd6b28a4b3\") " Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.128993 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-utilities" (OuterVolumeSpecName: "utilities") pod "47578e17-3544-4de9-8fe8-8fbd6b28a4b3" (UID: "47578e17-3544-4de9-8fe8-8fbd6b28a4b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.133513 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-kube-api-access-6lz4c" (OuterVolumeSpecName: "kube-api-access-6lz4c") pod "47578e17-3544-4de9-8fe8-8fbd6b28a4b3" (UID: "47578e17-3544-4de9-8fe8-8fbd6b28a4b3"). InnerVolumeSpecName "kube-api-access-6lz4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.176042 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47578e17-3544-4de9-8fe8-8fbd6b28a4b3" (UID: "47578e17-3544-4de9-8fe8-8fbd6b28a4b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.230314 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.230361 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lz4c\" (UniqueName: \"kubernetes.io/projected/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-kube-api-access-6lz4c\") on node \"crc\" DevicePath \"\"" Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.230376 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47578e17-3544-4de9-8fe8-8fbd6b28a4b3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.558310 4705 generic.go:334] "Generic (PLEG): container finished" podID="47578e17-3544-4de9-8fe8-8fbd6b28a4b3" containerID="07e7290446ef6b7ea3cdacccf2b2fb34f13bf63be8083b1d4ed25ef8d50c00e2" exitCode=0 Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.558362 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2mws" event={"ID":"47578e17-3544-4de9-8fe8-8fbd6b28a4b3","Type":"ContainerDied","Data":"07e7290446ef6b7ea3cdacccf2b2fb34f13bf63be8083b1d4ed25ef8d50c00e2"} Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.558980 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2mws" event={"ID":"47578e17-3544-4de9-8fe8-8fbd6b28a4b3","Type":"ContainerDied","Data":"3441b190ab39bc4103b04dfe28bb1830f170ef8ab44fdab1ba7fd2ec65677abe"} Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.559005 4705 scope.go:117] "RemoveContainer" containerID="07e7290446ef6b7ea3cdacccf2b2fb34f13bf63be8083b1d4ed25ef8d50c00e2" Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 
08:08:34.558401 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2mws" Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.586675 4705 scope.go:117] "RemoveContainer" containerID="059a5fa7fe57cf8cb2bf8d9289cf10d40c8ca143bac9b1aea5726779481cc2df" Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.604749 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2mws"] Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.611787 4705 scope.go:117] "RemoveContainer" containerID="4d6bdc003adb920a80c8ddad6e9ed326d808df149b4c3d9d55f1fbf1259dab00" Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.613145 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d2mws"] Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.674301 4705 scope.go:117] "RemoveContainer" containerID="07e7290446ef6b7ea3cdacccf2b2fb34f13bf63be8083b1d4ed25ef8d50c00e2" Jan 24 08:08:34 crc kubenswrapper[4705]: E0124 08:08:34.674677 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07e7290446ef6b7ea3cdacccf2b2fb34f13bf63be8083b1d4ed25ef8d50c00e2\": container with ID starting with 07e7290446ef6b7ea3cdacccf2b2fb34f13bf63be8083b1d4ed25ef8d50c00e2 not found: ID does not exist" containerID="07e7290446ef6b7ea3cdacccf2b2fb34f13bf63be8083b1d4ed25ef8d50c00e2" Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.674727 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07e7290446ef6b7ea3cdacccf2b2fb34f13bf63be8083b1d4ed25ef8d50c00e2"} err="failed to get container status \"07e7290446ef6b7ea3cdacccf2b2fb34f13bf63be8083b1d4ed25ef8d50c00e2\": rpc error: code = NotFound desc = could not find container \"07e7290446ef6b7ea3cdacccf2b2fb34f13bf63be8083b1d4ed25ef8d50c00e2\": container with ID starting with 
07e7290446ef6b7ea3cdacccf2b2fb34f13bf63be8083b1d4ed25ef8d50c00e2 not found: ID does not exist" Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.674751 4705 scope.go:117] "RemoveContainer" containerID="059a5fa7fe57cf8cb2bf8d9289cf10d40c8ca143bac9b1aea5726779481cc2df" Jan 24 08:08:34 crc kubenswrapper[4705]: E0124 08:08:34.675086 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"059a5fa7fe57cf8cb2bf8d9289cf10d40c8ca143bac9b1aea5726779481cc2df\": container with ID starting with 059a5fa7fe57cf8cb2bf8d9289cf10d40c8ca143bac9b1aea5726779481cc2df not found: ID does not exist" containerID="059a5fa7fe57cf8cb2bf8d9289cf10d40c8ca143bac9b1aea5726779481cc2df" Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.675246 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"059a5fa7fe57cf8cb2bf8d9289cf10d40c8ca143bac9b1aea5726779481cc2df"} err="failed to get container status \"059a5fa7fe57cf8cb2bf8d9289cf10d40c8ca143bac9b1aea5726779481cc2df\": rpc error: code = NotFound desc = could not find container \"059a5fa7fe57cf8cb2bf8d9289cf10d40c8ca143bac9b1aea5726779481cc2df\": container with ID starting with 059a5fa7fe57cf8cb2bf8d9289cf10d40c8ca143bac9b1aea5726779481cc2df not found: ID does not exist" Jan 24 08:08:34 crc kubenswrapper[4705]: I0124 08:08:34.675361 4705 scope.go:117] "RemoveContainer" containerID="4d6bdc003adb920a80c8ddad6e9ed326d808df149b4c3d9d55f1fbf1259dab00" Jan 24 08:08:34 crc kubenswrapper[4705]: E0124 08:08:34.676033 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d6bdc003adb920a80c8ddad6e9ed326d808df149b4c3d9d55f1fbf1259dab00\": container with ID starting with 4d6bdc003adb920a80c8ddad6e9ed326d808df149b4c3d9d55f1fbf1259dab00 not found: ID does not exist" containerID="4d6bdc003adb920a80c8ddad6e9ed326d808df149b4c3d9d55f1fbf1259dab00" Jan 24 08:08:34 crc 
kubenswrapper[4705]: I0124 08:08:34.676059 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6bdc003adb920a80c8ddad6e9ed326d808df149b4c3d9d55f1fbf1259dab00"} err="failed to get container status \"4d6bdc003adb920a80c8ddad6e9ed326d808df149b4c3d9d55f1fbf1259dab00\": rpc error: code = NotFound desc = could not find container \"4d6bdc003adb920a80c8ddad6e9ed326d808df149b4c3d9d55f1fbf1259dab00\": container with ID starting with 4d6bdc003adb920a80c8ddad6e9ed326d808df149b4c3d9d55f1fbf1259dab00 not found: ID does not exist" Jan 24 08:08:35 crc kubenswrapper[4705]: I0124 08:08:35.584997 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47578e17-3544-4de9-8fe8-8fbd6b28a4b3" path="/var/lib/kubelet/pods/47578e17-3544-4de9-8fe8-8fbd6b28a4b3/volumes" Jan 24 08:08:38 crc kubenswrapper[4705]: I0124 08:08:38.576628 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:08:38 crc kubenswrapper[4705]: E0124 08:08:38.578054 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:08:53 crc kubenswrapper[4705]: I0124 08:08:53.576084 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:08:53 crc kubenswrapper[4705]: E0124 08:08:53.577873 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:09:04 crc kubenswrapper[4705]: I0124 08:09:04.610510 4705 scope.go:117] "RemoveContainer" containerID="86bac3e0503430e17af877bf52998bf106f4485fb2afd86d46ab5c9cd8422b0c" Jan 24 08:09:04 crc kubenswrapper[4705]: I0124 08:09:04.637797 4705 scope.go:117] "RemoveContainer" containerID="f37bc30758c37c6ec67b6430424ec69f79a795fed0011d5abdad72050191cccb" Jan 24 08:09:04 crc kubenswrapper[4705]: I0124 08:09:04.669472 4705 scope.go:117] "RemoveContainer" containerID="967d1f93ca97dcec847f9e488ccdb8e4e79a97f2592827c2c7e38e20a8072c8c" Jan 24 08:09:04 crc kubenswrapper[4705]: I0124 08:09:04.710474 4705 scope.go:117] "RemoveContainer" containerID="dceb25f0b85589f81a6b2a64b8eec359431edf8775311097059fe4867ac163b8" Jan 24 08:09:08 crc kubenswrapper[4705]: I0124 08:09:08.631885 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:09:08 crc kubenswrapper[4705]: E0124 08:09:08.632860 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:09:21 crc kubenswrapper[4705]: I0124 08:09:21.582230 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:09:21 crc kubenswrapper[4705]: E0124 08:09:21.583104 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:09:34 crc kubenswrapper[4705]: I0124 08:09:34.575698 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:09:34 crc kubenswrapper[4705]: E0124 08:09:34.576468 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:09:49 crc kubenswrapper[4705]: I0124 08:09:49.575617 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:09:49 crc kubenswrapper[4705]: E0124 08:09:49.576252 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:10:02 crc kubenswrapper[4705]: I0124 08:10:02.575788 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:10:02 crc kubenswrapper[4705]: E0124 08:10:02.577552 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:10:08 crc kubenswrapper[4705]: I0124 08:10:08.477219 4705 generic.go:334] "Generic (PLEG): container finished" podID="67b8ef17-3a9a-4ebc-af02-eb475e2304af" containerID="c9ebbbc67c3d121634e3ed819da4587e120ce950d758f0528096057a6fef1207" exitCode=0 Jan 24 08:10:08 crc kubenswrapper[4705]: I0124 08:10:08.477408 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t" event={"ID":"67b8ef17-3a9a-4ebc-af02-eb475e2304af","Type":"ContainerDied","Data":"c9ebbbc67c3d121634e3ed819da4587e120ce950d758f0528096057a6fef1207"} Jan 24 08:10:09 crc kubenswrapper[4705]: I0124 08:10:09.948833 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.051158 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-inventory\") pod \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\" (UID: \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\") " Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.051216 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-ssh-key-openstack-edpm-ipam\") pod \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\" (UID: \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\") " Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.051335 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xvcl\" (UniqueName: \"kubernetes.io/projected/67b8ef17-3a9a-4ebc-af02-eb475e2304af-kube-api-access-6xvcl\") pod \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\" (UID: \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\") " Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.051489 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-bootstrap-combined-ca-bundle\") pod \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\" (UID: \"67b8ef17-3a9a-4ebc-af02-eb475e2304af\") " Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.056969 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67b8ef17-3a9a-4ebc-af02-eb475e2304af-kube-api-access-6xvcl" (OuterVolumeSpecName: "kube-api-access-6xvcl") pod "67b8ef17-3a9a-4ebc-af02-eb475e2304af" (UID: "67b8ef17-3a9a-4ebc-af02-eb475e2304af"). InnerVolumeSpecName "kube-api-access-6xvcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.057301 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "67b8ef17-3a9a-4ebc-af02-eb475e2304af" (UID: "67b8ef17-3a9a-4ebc-af02-eb475e2304af"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.079329 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "67b8ef17-3a9a-4ebc-af02-eb475e2304af" (UID: "67b8ef17-3a9a-4ebc-af02-eb475e2304af"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.085033 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-inventory" (OuterVolumeSpecName: "inventory") pod "67b8ef17-3a9a-4ebc-af02-eb475e2304af" (UID: "67b8ef17-3a9a-4ebc-af02-eb475e2304af"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.154337 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.154383 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.154403 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xvcl\" (UniqueName: \"kubernetes.io/projected/67b8ef17-3a9a-4ebc-af02-eb475e2304af-kube-api-access-6xvcl\") on node \"crc\" DevicePath \"\"" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.154414 4705 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67b8ef17-3a9a-4ebc-af02-eb475e2304af-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.500309 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t" event={"ID":"67b8ef17-3a9a-4ebc-af02-eb475e2304af","Type":"ContainerDied","Data":"5491b4536b285b31ad0f9e7202d441f44494c76d3d5cce84923d0debcf6ef4d5"} Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.500602 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5491b4536b285b31ad0f9e7202d441f44494c76d3d5cce84923d0debcf6ef4d5" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.500353 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.598707 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d"] Jan 24 08:10:10 crc kubenswrapper[4705]: E0124 08:10:10.599332 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47578e17-3544-4de9-8fe8-8fbd6b28a4b3" containerName="extract-content" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.599357 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="47578e17-3544-4de9-8fe8-8fbd6b28a4b3" containerName="extract-content" Jan 24 08:10:10 crc kubenswrapper[4705]: E0124 08:10:10.599367 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b21ef6-a5fa-43a8-86ed-ea0dc7312330" containerName="extract-content" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.599376 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b21ef6-a5fa-43a8-86ed-ea0dc7312330" containerName="extract-content" Jan 24 08:10:10 crc kubenswrapper[4705]: E0124 08:10:10.599386 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b21ef6-a5fa-43a8-86ed-ea0dc7312330" containerName="extract-utilities" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.599392 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b21ef6-a5fa-43a8-86ed-ea0dc7312330" containerName="extract-utilities" Jan 24 08:10:10 crc kubenswrapper[4705]: E0124 08:10:10.599415 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67b8ef17-3a9a-4ebc-af02-eb475e2304af" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.599425 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b8ef17-3a9a-4ebc-af02-eb475e2304af" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 24 08:10:10 crc kubenswrapper[4705]: E0124 08:10:10.599444 
4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b21ef6-a5fa-43a8-86ed-ea0dc7312330" containerName="registry-server" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.599453 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b21ef6-a5fa-43a8-86ed-ea0dc7312330" containerName="registry-server" Jan 24 08:10:10 crc kubenswrapper[4705]: E0124 08:10:10.599468 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47578e17-3544-4de9-8fe8-8fbd6b28a4b3" containerName="registry-server" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.599476 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="47578e17-3544-4de9-8fe8-8fbd6b28a4b3" containerName="registry-server" Jan 24 08:10:10 crc kubenswrapper[4705]: E0124 08:10:10.599493 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47578e17-3544-4de9-8fe8-8fbd6b28a4b3" containerName="extract-utilities" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.599500 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="47578e17-3544-4de9-8fe8-8fbd6b28a4b3" containerName="extract-utilities" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.599686 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3b21ef6-a5fa-43a8-86ed-ea0dc7312330" containerName="registry-server" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.599700 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="67b8ef17-3a9a-4ebc-af02-eb475e2304af" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.599717 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="47578e17-3544-4de9-8fe8-8fbd6b28a4b3" containerName="registry-server" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.600320 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.608491 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.608661 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.609738 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.610059 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.613198 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d"] Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.663994 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7phds\" (UniqueName: \"kubernetes.io/projected/f92be2f8-1ff3-4237-8046-ff1352af1bef-kube-api-access-7phds\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d\" (UID: \"f92be2f8-1ff3-4237-8046-ff1352af1bef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.664077 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f92be2f8-1ff3-4237-8046-ff1352af1bef-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d\" (UID: \"f92be2f8-1ff3-4237-8046-ff1352af1bef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" Jan 24 08:10:10 crc 
kubenswrapper[4705]: I0124 08:10:10.664143 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f92be2f8-1ff3-4237-8046-ff1352af1bef-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d\" (UID: \"f92be2f8-1ff3-4237-8046-ff1352af1bef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.765701 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7phds\" (UniqueName: \"kubernetes.io/projected/f92be2f8-1ff3-4237-8046-ff1352af1bef-kube-api-access-7phds\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d\" (UID: \"f92be2f8-1ff3-4237-8046-ff1352af1bef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.765799 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f92be2f8-1ff3-4237-8046-ff1352af1bef-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d\" (UID: \"f92be2f8-1ff3-4237-8046-ff1352af1bef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.765889 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f92be2f8-1ff3-4237-8046-ff1352af1bef-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d\" (UID: \"f92be2f8-1ff3-4237-8046-ff1352af1bef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.769699 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f92be2f8-1ff3-4237-8046-ff1352af1bef-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d\" (UID: \"f92be2f8-1ff3-4237-8046-ff1352af1bef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.769925 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f92be2f8-1ff3-4237-8046-ff1352af1bef-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d\" (UID: \"f92be2f8-1ff3-4237-8046-ff1352af1bef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.780952 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7phds\" (UniqueName: \"kubernetes.io/projected/f92be2f8-1ff3-4237-8046-ff1352af1bef-kube-api-access-7phds\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d\" (UID: \"f92be2f8-1ff3-4237-8046-ff1352af1bef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" Jan 24 08:10:10 crc kubenswrapper[4705]: I0124 08:10:10.919310 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" Jan 24 08:10:11 crc kubenswrapper[4705]: I0124 08:10:11.449889 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d"] Jan 24 08:10:11 crc kubenswrapper[4705]: I0124 08:10:11.457629 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 08:10:11 crc kubenswrapper[4705]: I0124 08:10:11.509379 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" event={"ID":"f92be2f8-1ff3-4237-8046-ff1352af1bef","Type":"ContainerStarted","Data":"118e9e07f2c7cc730e6728c52e3b5bd5e3f72a2a8a100721f408d65b3207df91"} Jan 24 08:10:12 crc kubenswrapper[4705]: I0124 08:10:12.520635 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" event={"ID":"f92be2f8-1ff3-4237-8046-ff1352af1bef","Type":"ContainerStarted","Data":"37eb79070af9c726bb621b831c54b15c914a0e85728a1b6850f0ca35f4f06e17"} Jan 24 08:10:12 crc kubenswrapper[4705]: I0124 08:10:12.537580 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" podStartSLOduration=2.085313497 podStartE2EDuration="2.537561212s" podCreationTimestamp="2026-01-24 08:10:10 +0000 UTC" firstStartedPulling="2026-01-24 08:10:11.457427348 +0000 UTC m=+1750.177300636" lastFinishedPulling="2026-01-24 08:10:11.909675073 +0000 UTC m=+1750.629548351" observedRunningTime="2026-01-24 08:10:12.5367584 +0000 UTC m=+1751.256631688" watchObservedRunningTime="2026-01-24 08:10:12.537561212 +0000 UTC m=+1751.257434500" Jan 24 08:10:16 crc kubenswrapper[4705]: I0124 08:10:16.575711 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:10:16 crc 
kubenswrapper[4705]: E0124 08:10:16.576554 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:10:18 crc kubenswrapper[4705]: I0124 08:10:18.152744 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-z5khx"] Jan 24 08:10:18 crc kubenswrapper[4705]: I0124 08:10:18.161772 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-j2ld4"] Jan 24 08:10:18 crc kubenswrapper[4705]: I0124 08:10:18.170639 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1362-account-create-update-zfxhz"] Jan 24 08:10:18 crc kubenswrapper[4705]: I0124 08:10:18.181651 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4f5d-account-create-update-897h6"] Jan 24 08:10:18 crc kubenswrapper[4705]: I0124 08:10:18.189987 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4f5d-account-create-update-897h6"] Jan 24 08:10:18 crc kubenswrapper[4705]: I0124 08:10:18.199578 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1362-account-create-update-zfxhz"] Jan 24 08:10:18 crc kubenswrapper[4705]: I0124 08:10:18.209432 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-j2ld4"] Jan 24 08:10:18 crc kubenswrapper[4705]: I0124 08:10:18.217589 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-z5khx"] Jan 24 08:10:19 crc kubenswrapper[4705]: I0124 08:10:19.590895 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2084db85-248b-4371-86f6-4ff38216e099" path="/var/lib/kubelet/pods/2084db85-248b-4371-86f6-4ff38216e099/volumes" Jan 24 08:10:19 crc kubenswrapper[4705]: I0124 08:10:19.593805 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="874c88aa-a889-442c-92ea-1bdfe2a23761" path="/var/lib/kubelet/pods/874c88aa-a889-442c-92ea-1bdfe2a23761/volumes" Jan 24 08:10:19 crc kubenswrapper[4705]: I0124 08:10:19.594538 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af3a2337-3d3f-4892-a825-5e88cc3cf834" path="/var/lib/kubelet/pods/af3a2337-3d3f-4892-a825-5e88cc3cf834/volumes" Jan 24 08:10:19 crc kubenswrapper[4705]: I0124 08:10:19.595268 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9e9f514-b21b-430c-a352-124dbb196d6d" path="/var/lib/kubelet/pods/e9e9f514-b21b-430c-a352-124dbb196d6d/volumes" Jan 24 08:10:21 crc kubenswrapper[4705]: I0124 08:10:21.029873 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-dfc3-account-create-update-z59kd"] Jan 24 08:10:21 crc kubenswrapper[4705]: I0124 08:10:21.039798 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-j6cfn"] Jan 24 08:10:21 crc kubenswrapper[4705]: I0124 08:10:21.049402 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-dfc3-account-create-update-z59kd"] Jan 24 08:10:21 crc kubenswrapper[4705]: I0124 08:10:21.058307 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-j6cfn"] Jan 24 08:10:21 crc kubenswrapper[4705]: I0124 08:10:21.598455 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="474fa99b-e87b-40e0-9de8-7bffc3b57abe" path="/var/lib/kubelet/pods/474fa99b-e87b-40e0-9de8-7bffc3b57abe/volumes" Jan 24 08:10:21 crc kubenswrapper[4705]: I0124 08:10:21.600325 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5062aa1f-60d7-499c-9751-eb326a788033" 
path="/var/lib/kubelet/pods/5062aa1f-60d7-499c-9751-eb326a788033/volumes" Jan 24 08:10:29 crc kubenswrapper[4705]: I0124 08:10:29.576909 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:10:29 crc kubenswrapper[4705]: E0124 08:10:29.578051 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:10:42 crc kubenswrapper[4705]: I0124 08:10:42.576617 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:10:42 crc kubenswrapper[4705]: E0124 08:10:42.577399 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:10:47 crc kubenswrapper[4705]: I0124 08:10:47.055573 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qrbfv"] Jan 24 08:10:47 crc kubenswrapper[4705]: I0124 08:10:47.071504 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qrbfv"] Jan 24 08:10:47 crc kubenswrapper[4705]: I0124 08:10:47.588964 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e7afb13-2fb5-4520-acfc-d52cb558cd6c" 
path="/var/lib/kubelet/pods/8e7afb13-2fb5-4520-acfc-d52cb558cd6c/volumes" Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.039040 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-fsqnk"] Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.051548 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-eb11-account-create-update-8g2gk"] Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.066395 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-fsqnk"] Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.079659 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-eb11-account-create-update-8g2gk"] Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.092153 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-xx2zn"] Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.103558 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lmg2v"] Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.113477 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-3bed-account-create-update-ffpcb"] Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.121415 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lmg2v"] Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.129370 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-xx2zn"] Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.137954 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-3bed-account-create-update-ffpcb"] Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.145527 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4skzp"] Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.152917 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-db-create-4skzp"] Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.161163 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b6a9-account-create-update-vlfsr"] Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.168406 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b6a9-account-create-update-vlfsr"] Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.175504 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0742-account-create-update-sh6tr"] Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.183007 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0742-account-create-update-sh6tr"] Jan 24 08:10:54 crc kubenswrapper[4705]: I0124 08:10:54.576993 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:10:54 crc kubenswrapper[4705]: E0124 08:10:54.577601 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:10:55 crc kubenswrapper[4705]: I0124 08:10:55.587040 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c9a82f-9177-4a55-8059-6a498bbf927d" path="/var/lib/kubelet/pods/05c9a82f-9177-4a55-8059-6a498bbf927d/volumes" Jan 24 08:10:55 crc kubenswrapper[4705]: I0124 08:10:55.588012 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0931e94d-dcf4-447e-bbda-071ae0b176ec" path="/var/lib/kubelet/pods/0931e94d-dcf4-447e-bbda-071ae0b176ec/volumes" Jan 24 08:10:55 crc kubenswrapper[4705]: I0124 08:10:55.588552 4705 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28b7cacf-32a2-48d0-af65-162d7b360d89" path="/var/lib/kubelet/pods/28b7cacf-32a2-48d0-af65-162d7b360d89/volumes" Jan 24 08:10:55 crc kubenswrapper[4705]: I0124 08:10:55.589221 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60ed0253-206d-47a8-bab2-31b8fc8cd69f" path="/var/lib/kubelet/pods/60ed0253-206d-47a8-bab2-31b8fc8cd69f/volumes" Jan 24 08:10:55 crc kubenswrapper[4705]: I0124 08:10:55.590376 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6564fea8-f9ad-47ed-90fa-1d08616f0b60" path="/var/lib/kubelet/pods/6564fea8-f9ad-47ed-90fa-1d08616f0b60/volumes" Jan 24 08:10:55 crc kubenswrapper[4705]: I0124 08:10:55.590932 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879e9f18-d823-4818-b1f5-4c0d3da3afb7" path="/var/lib/kubelet/pods/879e9f18-d823-4818-b1f5-4c0d3da3afb7/volumes" Jan 24 08:10:55 crc kubenswrapper[4705]: I0124 08:10:55.591611 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5d096e4-c16f-4958-a1ea-d38cdecf18da" path="/var/lib/kubelet/pods/f5d096e4-c16f-4958-a1ea-d38cdecf18da/volumes" Jan 24 08:10:55 crc kubenswrapper[4705]: I0124 08:10:55.592679 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f637d189-c592-4ff2-96a3-8b001688a84f" path="/var/lib/kubelet/pods/f637d189-c592-4ff2-96a3-8b001688a84f/volumes" Jan 24 08:11:00 crc kubenswrapper[4705]: I0124 08:11:00.044682 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-5q8ws"] Jan 24 08:11:00 crc kubenswrapper[4705]: I0124 08:11:00.054135 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-5q8ws"] Jan 24 08:11:01 crc kubenswrapper[4705]: I0124 08:11:01.629543 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a05482d0-25f0-4382-bfdb-ff3053e44366" path="/var/lib/kubelet/pods/a05482d0-25f0-4382-bfdb-ff3053e44366/volumes" 
Jan 24 08:11:04 crc kubenswrapper[4705]: I0124 08:11:04.852216 4705 scope.go:117] "RemoveContainer" containerID="e504850f8af85801945fb4884114e302e53f532dd2ad546f50c60eac11b78dd6" Jan 24 08:11:04 crc kubenswrapper[4705]: I0124 08:11:04.876153 4705 scope.go:117] "RemoveContainer" containerID="12e6408405394c890f396ca13b08fc9d4550c57b6792790874de3b67b6a062b5" Jan 24 08:11:04 crc kubenswrapper[4705]: I0124 08:11:04.945187 4705 scope.go:117] "RemoveContainer" containerID="bbb57f5549c222454dec74a7e21afda9954e91eac4862ab766fbe53075156914" Jan 24 08:11:04 crc kubenswrapper[4705]: I0124 08:11:04.984337 4705 scope.go:117] "RemoveContainer" containerID="21c402e96d6f5100b9d088fc00ee183be39dea415265269ddd4b52e3b4e69f7d" Jan 24 08:11:05 crc kubenswrapper[4705]: I0124 08:11:05.030143 4705 scope.go:117] "RemoveContainer" containerID="e59acc1d99628313da7f48a7709ab1201cbdd0b675e5c3d6cc074bff4ef2a728" Jan 24 08:11:05 crc kubenswrapper[4705]: I0124 08:11:05.071571 4705 scope.go:117] "RemoveContainer" containerID="09d6783745d018637f9490bb61448483934129470d1ddb96b103992836e47078" Jan 24 08:11:05 crc kubenswrapper[4705]: I0124 08:11:05.110473 4705 scope.go:117] "RemoveContainer" containerID="8afc5f9d1ff323d3f4539a1a9882c6116af010c40b76213a8c26d7211ac432eb" Jan 24 08:11:05 crc kubenswrapper[4705]: I0124 08:11:05.128756 4705 scope.go:117] "RemoveContainer" containerID="af7f126f9c2f951196fe112213bc9a8e3a98c84d02caa118ecd7a55175838338" Jan 24 08:11:05 crc kubenswrapper[4705]: I0124 08:11:05.151221 4705 scope.go:117] "RemoveContainer" containerID="7d9b1a058f2c8c327b13d3f8b8320f63e29c4eef91a6967af29b732ff2aefdf5" Jan 24 08:11:05 crc kubenswrapper[4705]: I0124 08:11:05.169694 4705 scope.go:117] "RemoveContainer" containerID="38bde5cf69a417a30b55d89ca52e211f599c4d17b3bfccc698d7b8840997a4f4" Jan 24 08:11:05 crc kubenswrapper[4705]: I0124 08:11:05.188918 4705 scope.go:117] "RemoveContainer" containerID="58d6e93cdbd28215cee24b92af0f059cd38c8b92f55bd03048eded63f361eaba" Jan 24 08:11:05 crc 
kubenswrapper[4705]: I0124 08:11:05.209978 4705 scope.go:117] "RemoveContainer" containerID="ef74a14b98b649a23a658ea9995acca22723396d3eafd55eecb16f728f04d4b2" Jan 24 08:11:05 crc kubenswrapper[4705]: I0124 08:11:05.249259 4705 scope.go:117] "RemoveContainer" containerID="3ff88df58b421fefe62256636e9e8d3049c7debc047730a79e69083289be5b51" Jan 24 08:11:05 crc kubenswrapper[4705]: I0124 08:11:05.272002 4705 scope.go:117] "RemoveContainer" containerID="5502486742c0464560f4b4dbdfc37e05af383f8cd794b0b3f78fee8b2545d32a" Jan 24 08:11:05 crc kubenswrapper[4705]: I0124 08:11:05.292731 4705 scope.go:117] "RemoveContainer" containerID="f5eb79a7e1ec5e8375286109f00486d904c8076d80c27babf32f01d8b8b435f9" Jan 24 08:11:05 crc kubenswrapper[4705]: I0124 08:11:05.316531 4705 scope.go:117] "RemoveContainer" containerID="25d18c478c1e9b79b6b4106c2375879eeb69eea8cb581bb3a51f6627d776fa8d" Jan 24 08:11:08 crc kubenswrapper[4705]: I0124 08:11:08.576340 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:11:08 crc kubenswrapper[4705]: E0124 08:11:08.577015 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:11:22 crc kubenswrapper[4705]: I0124 08:11:22.575707 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:11:22 crc kubenswrapper[4705]: E0124 08:11:22.576602 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:11:34 crc kubenswrapper[4705]: I0124 08:11:34.576354 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:11:34 crc kubenswrapper[4705]: E0124 08:11:34.577106 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:11:38 crc kubenswrapper[4705]: I0124 08:11:38.044354 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-zlxm7"] Jan 24 08:11:38 crc kubenswrapper[4705]: I0124 08:11:38.051930 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-zlxm7"] Jan 24 08:11:39 crc kubenswrapper[4705]: I0124 08:11:39.588387 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d40a2abc-33c9-4284-af71-03fc828b92d2" path="/var/lib/kubelet/pods/d40a2abc-33c9-4284-af71-03fc828b92d2/volumes" Jan 24 08:11:43 crc kubenswrapper[4705]: I0124 08:11:43.028625 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-x9rjl"] Jan 24 08:11:43 crc kubenswrapper[4705]: I0124 08:11:43.037533 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-x9rjl"] Jan 24 08:11:43 crc kubenswrapper[4705]: I0124 08:11:43.588530 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0eee50a-21e0-4948-9afa-b552d6173e3b" 
path="/var/lib/kubelet/pods/e0eee50a-21e0-4948-9afa-b552d6173e3b/volumes" Jan 24 08:11:45 crc kubenswrapper[4705]: I0124 08:11:45.576158 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:11:45 crc kubenswrapper[4705]: E0124 08:11:45.576838 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:11:46 crc kubenswrapper[4705]: I0124 08:11:46.028121 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-pvrgf"] Jan 24 08:11:46 crc kubenswrapper[4705]: I0124 08:11:46.037585 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-pvrgf"] Jan 24 08:11:47 crc kubenswrapper[4705]: I0124 08:11:47.588755 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb2004e-d936-4fb9-929b-b949158ac9b8" path="/var/lib/kubelet/pods/6eb2004e-d936-4fb9-929b-b949158ac9b8/volumes" Jan 24 08:11:48 crc kubenswrapper[4705]: I0124 08:11:48.030394 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-jrq4n"] Jan 24 08:11:48 crc kubenswrapper[4705]: I0124 08:11:48.039167 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-jrq4n"] Jan 24 08:11:49 crc kubenswrapper[4705]: I0124 08:11:49.588411 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9" path="/var/lib/kubelet/pods/bb5c9c32-f7bb-4ee0-8eb4-b5504ea0f8e9/volumes" Jan 24 08:11:50 crc kubenswrapper[4705]: I0124 08:11:50.029661 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-bootstrap-f8ghj"] Jan 24 08:11:50 crc kubenswrapper[4705]: I0124 08:11:50.038265 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-f8ghj"] Jan 24 08:11:51 crc kubenswrapper[4705]: I0124 08:11:51.585532 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a444b75-4995-40f9-8432-b62814685b02" path="/var/lib/kubelet/pods/5a444b75-4995-40f9-8432-b62814685b02/volumes" Jan 24 08:11:54 crc kubenswrapper[4705]: I0124 08:11:54.136290 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-tdcf8"] Jan 24 08:11:54 crc kubenswrapper[4705]: I0124 08:11:54.146721 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-tdcf8"] Jan 24 08:11:55 crc kubenswrapper[4705]: I0124 08:11:55.586140 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e66c9c6e-cac9-4e99-b4d7-532f87f30ada" path="/var/lib/kubelet/pods/e66c9c6e-cac9-4e99-b4d7-532f87f30ada/volumes" Jan 24 08:11:59 crc kubenswrapper[4705]: I0124 08:11:59.576073 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:11:59 crc kubenswrapper[4705]: E0124 08:11:59.576607 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:12:05 crc kubenswrapper[4705]: I0124 08:12:05.634069 4705 scope.go:117] "RemoveContainer" containerID="fd943b09146631048436c719415acd1c7f286ac6c01c5fa0c29fffcb6ba8dcd0" Jan 24 08:12:05 crc kubenswrapper[4705]: I0124 08:12:05.687288 4705 scope.go:117] "RemoveContainer" 
containerID="61561a32e954dfb4dce0a870ed165eece13279f26a9aec75ff9695cbacfca56a" Jan 24 08:12:05 crc kubenswrapper[4705]: I0124 08:12:05.729809 4705 scope.go:117] "RemoveContainer" containerID="059112f9c3818b8fe52b2126ef42893a6b452be271822904bf04a1268c4cb111" Jan 24 08:12:05 crc kubenswrapper[4705]: I0124 08:12:05.759473 4705 scope.go:117] "RemoveContainer" containerID="d71d7328d5b349c8d9015fd098f29ffe7922d1d706f8fdd6285f156e63a8b4be" Jan 24 08:12:05 crc kubenswrapper[4705]: I0124 08:12:05.812441 4705 scope.go:117] "RemoveContainer" containerID="af1891f6e9ec6e86a3f010a149959b79147b36c8417b8953f50a9e9358d38436" Jan 24 08:12:05 crc kubenswrapper[4705]: I0124 08:12:05.853433 4705 scope.go:117] "RemoveContainer" containerID="0777ce2c4bf566e52505eac9641d5f15c457353ae9c295220ba2953dca4c36de" Jan 24 08:12:06 crc kubenswrapper[4705]: I0124 08:12:06.466578 4705 generic.go:334] "Generic (PLEG): container finished" podID="f92be2f8-1ff3-4237-8046-ff1352af1bef" containerID="37eb79070af9c726bb621b831c54b15c914a0e85728a1b6850f0ca35f4f06e17" exitCode=0 Jan 24 08:12:06 crc kubenswrapper[4705]: I0124 08:12:06.466628 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" event={"ID":"f92be2f8-1ff3-4237-8046-ff1352af1bef","Type":"ContainerDied","Data":"37eb79070af9c726bb621b831c54b15c914a0e85728a1b6850f0ca35f4f06e17"} Jan 24 08:12:07 crc kubenswrapper[4705]: I0124 08:12:07.892765 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.271419 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f92be2f8-1ff3-4237-8046-ff1352af1bef-inventory\") pod \"f92be2f8-1ff3-4237-8046-ff1352af1bef\" (UID: \"f92be2f8-1ff3-4237-8046-ff1352af1bef\") " Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.271516 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7phds\" (UniqueName: \"kubernetes.io/projected/f92be2f8-1ff3-4237-8046-ff1352af1bef-kube-api-access-7phds\") pod \"f92be2f8-1ff3-4237-8046-ff1352af1bef\" (UID: \"f92be2f8-1ff3-4237-8046-ff1352af1bef\") " Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.271747 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f92be2f8-1ff3-4237-8046-ff1352af1bef-ssh-key-openstack-edpm-ipam\") pod \"f92be2f8-1ff3-4237-8046-ff1352af1bef\" (UID: \"f92be2f8-1ff3-4237-8046-ff1352af1bef\") " Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.278341 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f92be2f8-1ff3-4237-8046-ff1352af1bef-kube-api-access-7phds" (OuterVolumeSpecName: "kube-api-access-7phds") pod "f92be2f8-1ff3-4237-8046-ff1352af1bef" (UID: "f92be2f8-1ff3-4237-8046-ff1352af1bef"). InnerVolumeSpecName "kube-api-access-7phds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.301178 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92be2f8-1ff3-4237-8046-ff1352af1bef-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f92be2f8-1ff3-4237-8046-ff1352af1bef" (UID: "f92be2f8-1ff3-4237-8046-ff1352af1bef"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.304125 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92be2f8-1ff3-4237-8046-ff1352af1bef-inventory" (OuterVolumeSpecName: "inventory") pod "f92be2f8-1ff3-4237-8046-ff1352af1bef" (UID: "f92be2f8-1ff3-4237-8046-ff1352af1bef"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.374246 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f92be2f8-1ff3-4237-8046-ff1352af1bef-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.374298 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f92be2f8-1ff3-4237-8046-ff1352af1bef-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.374310 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7phds\" (UniqueName: \"kubernetes.io/projected/f92be2f8-1ff3-4237-8046-ff1352af1bef-kube-api-access-7phds\") on node \"crc\" DevicePath \"\"" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.486201 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" 
event={"ID":"f92be2f8-1ff3-4237-8046-ff1352af1bef","Type":"ContainerDied","Data":"118e9e07f2c7cc730e6728c52e3b5bd5e3f72a2a8a100721f408d65b3207df91"} Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.486273 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="118e9e07f2c7cc730e6728c52e3b5bd5e3f72a2a8a100721f408d65b3207df91" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.486341 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.570979 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj"] Jan 24 08:12:08 crc kubenswrapper[4705]: E0124 08:12:08.571490 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92be2f8-1ff3-4237-8046-ff1352af1bef" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.571510 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92be2f8-1ff3-4237-8046-ff1352af1bef" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.571741 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f92be2f8-1ff3-4237-8046-ff1352af1bef" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.572625 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.575052 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.575249 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.575380 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.575410 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.586163 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj"] Jan 24 08:12:08 crc kubenswrapper[4705]: E0124 08:12:08.649146 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf92be2f8_1ff3_4237_8046_ff1352af1bef.slice/crio-118e9e07f2c7cc730e6728c52e3b5bd5e3f72a2a8a100721f408d65b3207df91\": RecentStats: unable to find data in memory cache]" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.680298 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj\" (UID: \"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.680939 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54gnz\" (UniqueName: \"kubernetes.io/projected/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-kube-api-access-54gnz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj\" (UID: \"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.681256 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj\" (UID: \"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.783381 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54gnz\" (UniqueName: \"kubernetes.io/projected/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-kube-api-access-54gnz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj\" (UID: \"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.783662 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj\" (UID: \"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.783705 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj\" (UID: \"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.788620 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj\" (UID: \"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.791249 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj\" (UID: \"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.799426 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54gnz\" (UniqueName: \"kubernetes.io/projected/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-kube-api-access-54gnz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj\" (UID: \"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj" Jan 24 08:12:08 crc kubenswrapper[4705]: I0124 08:12:08.892679 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj" Jan 24 08:12:09 crc kubenswrapper[4705]: I0124 08:12:09.408351 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj"] Jan 24 08:12:09 crc kubenswrapper[4705]: W0124 08:12:09.414074 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d0eb2c_3a7c_4b3d_9cb7_b2c46beec516.slice/crio-d7c0461c154d35703d43b974a04d5f0148e90c0d3f39a6a11a0e44b1ee95ac3c WatchSource:0}: Error finding container d7c0461c154d35703d43b974a04d5f0148e90c0d3f39a6a11a0e44b1ee95ac3c: Status 404 returned error can't find the container with id d7c0461c154d35703d43b974a04d5f0148e90c0d3f39a6a11a0e44b1ee95ac3c Jan 24 08:12:09 crc kubenswrapper[4705]: I0124 08:12:09.495027 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj" event={"ID":"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516","Type":"ContainerStarted","Data":"d7c0461c154d35703d43b974a04d5f0148e90c0d3f39a6a11a0e44b1ee95ac3c"} Jan 24 08:12:10 crc kubenswrapper[4705]: I0124 08:12:10.039978 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-lvscj"] Jan 24 08:12:10 crc kubenswrapper[4705]: I0124 08:12:10.057467 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-lvscj"] Jan 24 08:12:10 crc kubenswrapper[4705]: I0124 08:12:10.504157 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj" event={"ID":"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516","Type":"ContainerStarted","Data":"1428167278c95ae8cc948eae2ce50b156a658ad74fa743ce78929e893212c9ca"} Jan 24 08:12:10 crc kubenswrapper[4705]: I0124 08:12:10.534101 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj" podStartSLOduration=1.9127217 podStartE2EDuration="2.534065098s" podCreationTimestamp="2026-01-24 08:12:08 +0000 UTC" firstStartedPulling="2026-01-24 08:12:09.41737531 +0000 UTC m=+1868.137248588" lastFinishedPulling="2026-01-24 08:12:10.038718688 +0000 UTC m=+1868.758591986" observedRunningTime="2026-01-24 08:12:10.516756775 +0000 UTC m=+1869.236630063" watchObservedRunningTime="2026-01-24 08:12:10.534065098 +0000 UTC m=+1869.253938386" Jan 24 08:12:11 crc kubenswrapper[4705]: I0124 08:12:11.583781 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:12:11 crc kubenswrapper[4705]: E0124 08:12:11.584344 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:12:11 crc kubenswrapper[4705]: I0124 08:12:11.588953 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd93ff70-0f51-4af8-9a10-6407f4901667" path="/var/lib/kubelet/pods/fd93ff70-0f51-4af8-9a10-6407f4901667/volumes" Jan 24 08:12:24 crc kubenswrapper[4705]: I0124 08:12:24.576079 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:12:24 crc kubenswrapper[4705]: E0124 08:12:24.577010 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.385936 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dvpng"] Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.391192 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.397215 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvpng"] Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.436979 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bde3532-50d9-4b58-bdbb-912b1fe4d251-catalog-content\") pod \"redhat-marketplace-dvpng\" (UID: \"2bde3532-50d9-4b58-bdbb-912b1fe4d251\") " pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.437196 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnhpk\" (UniqueName: \"kubernetes.io/projected/2bde3532-50d9-4b58-bdbb-912b1fe4d251-kube-api-access-rnhpk\") pod \"redhat-marketplace-dvpng\" (UID: \"2bde3532-50d9-4b58-bdbb-912b1fe4d251\") " pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.437426 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bde3532-50d9-4b58-bdbb-912b1fe4d251-utilities\") pod \"redhat-marketplace-dvpng\" (UID: \"2bde3532-50d9-4b58-bdbb-912b1fe4d251\") " pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.539895 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rnhpk\" (UniqueName: \"kubernetes.io/projected/2bde3532-50d9-4b58-bdbb-912b1fe4d251-kube-api-access-rnhpk\") pod \"redhat-marketplace-dvpng\" (UID: \"2bde3532-50d9-4b58-bdbb-912b1fe4d251\") " pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.540025 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bde3532-50d9-4b58-bdbb-912b1fe4d251-utilities\") pod \"redhat-marketplace-dvpng\" (UID: \"2bde3532-50d9-4b58-bdbb-912b1fe4d251\") " pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.540054 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bde3532-50d9-4b58-bdbb-912b1fe4d251-catalog-content\") pod \"redhat-marketplace-dvpng\" (UID: \"2bde3532-50d9-4b58-bdbb-912b1fe4d251\") " pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.540522 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bde3532-50d9-4b58-bdbb-912b1fe4d251-catalog-content\") pod \"redhat-marketplace-dvpng\" (UID: \"2bde3532-50d9-4b58-bdbb-912b1fe4d251\") " pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.540638 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bde3532-50d9-4b58-bdbb-912b1fe4d251-utilities\") pod \"redhat-marketplace-dvpng\" (UID: \"2bde3532-50d9-4b58-bdbb-912b1fe4d251\") " pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.560293 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rnhpk\" (UniqueName: \"kubernetes.io/projected/2bde3532-50d9-4b58-bdbb-912b1fe4d251-kube-api-access-rnhpk\") pod \"redhat-marketplace-dvpng\" (UID: \"2bde3532-50d9-4b58-bdbb-912b1fe4d251\") " pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.795806 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.834630 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6htlg"] Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.839247 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6htlg" Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.848450 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6htlg"] Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.900709 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx8nc\" (UniqueName: \"kubernetes.io/projected/3222cd1a-e773-46a3-94bf-cd280f3c2b27-kube-api-access-nx8nc\") pod \"redhat-operators-6htlg\" (UID: \"3222cd1a-e773-46a3-94bf-cd280f3c2b27\") " pod="openshift-marketplace/redhat-operators-6htlg" Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.900813 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3222cd1a-e773-46a3-94bf-cd280f3c2b27-catalog-content\") pod \"redhat-operators-6htlg\" (UID: \"3222cd1a-e773-46a3-94bf-cd280f3c2b27\") " pod="openshift-marketplace/redhat-operators-6htlg" Jan 24 08:12:28 crc kubenswrapper[4705]: I0124 08:12:28.900947 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/3222cd1a-e773-46a3-94bf-cd280f3c2b27-utilities\") pod \"redhat-operators-6htlg\" (UID: \"3222cd1a-e773-46a3-94bf-cd280f3c2b27\") " pod="openshift-marketplace/redhat-operators-6htlg" Jan 24 08:12:29 crc kubenswrapper[4705]: I0124 08:12:29.003034 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3222cd1a-e773-46a3-94bf-cd280f3c2b27-utilities\") pod \"redhat-operators-6htlg\" (UID: \"3222cd1a-e773-46a3-94bf-cd280f3c2b27\") " pod="openshift-marketplace/redhat-operators-6htlg" Jan 24 08:12:29 crc kubenswrapper[4705]: I0124 08:12:29.003643 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3222cd1a-e773-46a3-94bf-cd280f3c2b27-utilities\") pod \"redhat-operators-6htlg\" (UID: \"3222cd1a-e773-46a3-94bf-cd280f3c2b27\") " pod="openshift-marketplace/redhat-operators-6htlg" Jan 24 08:12:29 crc kubenswrapper[4705]: I0124 08:12:29.013557 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx8nc\" (UniqueName: \"kubernetes.io/projected/3222cd1a-e773-46a3-94bf-cd280f3c2b27-kube-api-access-nx8nc\") pod \"redhat-operators-6htlg\" (UID: \"3222cd1a-e773-46a3-94bf-cd280f3c2b27\") " pod="openshift-marketplace/redhat-operators-6htlg" Jan 24 08:12:29 crc kubenswrapper[4705]: I0124 08:12:29.013754 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3222cd1a-e773-46a3-94bf-cd280f3c2b27-catalog-content\") pod \"redhat-operators-6htlg\" (UID: \"3222cd1a-e773-46a3-94bf-cd280f3c2b27\") " pod="openshift-marketplace/redhat-operators-6htlg" Jan 24 08:12:29 crc kubenswrapper[4705]: I0124 08:12:29.014373 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3222cd1a-e773-46a3-94bf-cd280f3c2b27-catalog-content\") pod \"redhat-operators-6htlg\" (UID: \"3222cd1a-e773-46a3-94bf-cd280f3c2b27\") " pod="openshift-marketplace/redhat-operators-6htlg" Jan 24 08:12:29 crc kubenswrapper[4705]: I0124 08:12:29.047057 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx8nc\" (UniqueName: \"kubernetes.io/projected/3222cd1a-e773-46a3-94bf-cd280f3c2b27-kube-api-access-nx8nc\") pod \"redhat-operators-6htlg\" (UID: \"3222cd1a-e773-46a3-94bf-cd280f3c2b27\") " pod="openshift-marketplace/redhat-operators-6htlg" Jan 24 08:12:29 crc kubenswrapper[4705]: I0124 08:12:29.281382 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6htlg" Jan 24 08:12:29 crc kubenswrapper[4705]: I0124 08:12:29.339494 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvpng"] Jan 24 08:12:29 crc kubenswrapper[4705]: I0124 08:12:29.759739 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6htlg"] Jan 24 08:12:29 crc kubenswrapper[4705]: I0124 08:12:29.847193 4705 generic.go:334] "Generic (PLEG): container finished" podID="2bde3532-50d9-4b58-bdbb-912b1fe4d251" containerID="b503248b456d5c59fe76ed46e8cb5d7c0bd8e3b85e21221d915cf684578550d5" exitCode=0 Jan 24 08:12:29 crc kubenswrapper[4705]: I0124 08:12:29.847272 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvpng" event={"ID":"2bde3532-50d9-4b58-bdbb-912b1fe4d251","Type":"ContainerDied","Data":"b503248b456d5c59fe76ed46e8cb5d7c0bd8e3b85e21221d915cf684578550d5"} Jan 24 08:12:29 crc kubenswrapper[4705]: I0124 08:12:29.847302 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvpng" 
event={"ID":"2bde3532-50d9-4b58-bdbb-912b1fe4d251","Type":"ContainerStarted","Data":"0b008532ca59fb26968923bed77ea8a4e76f604d3b8e2836daa8d0b237cc98dd"} Jan 24 08:12:29 crc kubenswrapper[4705]: I0124 08:12:29.848382 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6htlg" event={"ID":"3222cd1a-e773-46a3-94bf-cd280f3c2b27","Type":"ContainerStarted","Data":"e7b37576d0f95b714e79c81d70ae19feca34629fab0e53f5612684402ab9371f"} Jan 24 08:12:30 crc kubenswrapper[4705]: I0124 08:12:30.864799 4705 generic.go:334] "Generic (PLEG): container finished" podID="3222cd1a-e773-46a3-94bf-cd280f3c2b27" containerID="97d1381832a3e1ccb0e116536794e03657ca8fdc2bc301c53c649c019d89f077" exitCode=0 Jan 24 08:12:30 crc kubenswrapper[4705]: I0124 08:12:30.864900 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6htlg" event={"ID":"3222cd1a-e773-46a3-94bf-cd280f3c2b27","Type":"ContainerDied","Data":"97d1381832a3e1ccb0e116536794e03657ca8fdc2bc301c53c649c019d89f077"} Jan 24 08:12:30 crc kubenswrapper[4705]: I0124 08:12:30.870006 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvpng" event={"ID":"2bde3532-50d9-4b58-bdbb-912b1fe4d251","Type":"ContainerStarted","Data":"a4aa9210be6883317539b7e728764f47e0249c96c4c81b7c9576b6f303b44350"} Jan 24 08:12:31 crc kubenswrapper[4705]: I0124 08:12:31.883136 4705 generic.go:334] "Generic (PLEG): container finished" podID="2bde3532-50d9-4b58-bdbb-912b1fe4d251" containerID="a4aa9210be6883317539b7e728764f47e0249c96c4c81b7c9576b6f303b44350" exitCode=0 Jan 24 08:12:31 crc kubenswrapper[4705]: I0124 08:12:31.884632 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvpng" event={"ID":"2bde3532-50d9-4b58-bdbb-912b1fe4d251","Type":"ContainerDied","Data":"a4aa9210be6883317539b7e728764f47e0249c96c4c81b7c9576b6f303b44350"} Jan 24 08:12:31 crc kubenswrapper[4705]: I0124 
08:12:31.890260 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6htlg" event={"ID":"3222cd1a-e773-46a3-94bf-cd280f3c2b27","Type":"ContainerStarted","Data":"efa2f22d23dacaf20c32d72d75bb8b396820ff82204b623af9d10ed783453deb"} Jan 24 08:12:32 crc kubenswrapper[4705]: I0124 08:12:32.899272 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvpng" event={"ID":"2bde3532-50d9-4b58-bdbb-912b1fe4d251","Type":"ContainerStarted","Data":"1a3dc63f12920b21c119a5ebf9de0695020667e78d744545d5a01fef72fa51ce"} Jan 24 08:12:32 crc kubenswrapper[4705]: I0124 08:12:32.921435 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dvpng" podStartSLOduration=2.484276866 podStartE2EDuration="4.92141599s" podCreationTimestamp="2026-01-24 08:12:28 +0000 UTC" firstStartedPulling="2026-01-24 08:12:29.849194344 +0000 UTC m=+1888.569067632" lastFinishedPulling="2026-01-24 08:12:32.286333468 +0000 UTC m=+1891.006206756" observedRunningTime="2026-01-24 08:12:32.921338587 +0000 UTC m=+1891.641211875" watchObservedRunningTime="2026-01-24 08:12:32.92141599 +0000 UTC m=+1891.641289278" Jan 24 08:12:33 crc kubenswrapper[4705]: I0124 08:12:33.909801 4705 generic.go:334] "Generic (PLEG): container finished" podID="3222cd1a-e773-46a3-94bf-cd280f3c2b27" containerID="efa2f22d23dacaf20c32d72d75bb8b396820ff82204b623af9d10ed783453deb" exitCode=0 Jan 24 08:12:33 crc kubenswrapper[4705]: I0124 08:12:33.909869 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6htlg" event={"ID":"3222cd1a-e773-46a3-94bf-cd280f3c2b27","Type":"ContainerDied","Data":"efa2f22d23dacaf20c32d72d75bb8b396820ff82204b623af9d10ed783453deb"} Jan 24 08:12:34 crc kubenswrapper[4705]: I0124 08:12:34.932453 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6htlg" 
event={"ID":"3222cd1a-e773-46a3-94bf-cd280f3c2b27","Type":"ContainerStarted","Data":"9700fb886c5c15c39d3197266340b3390c1dade291dcfb2b1c25e14fe9a33b90"} Jan 24 08:12:34 crc kubenswrapper[4705]: I0124 08:12:34.958214 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6htlg" podStartSLOduration=3.280336502 podStartE2EDuration="6.958199407s" podCreationTimestamp="2026-01-24 08:12:28 +0000 UTC" firstStartedPulling="2026-01-24 08:12:30.866031484 +0000 UTC m=+1889.585904772" lastFinishedPulling="2026-01-24 08:12:34.543894399 +0000 UTC m=+1893.263767677" observedRunningTime="2026-01-24 08:12:34.957540348 +0000 UTC m=+1893.677413636" watchObservedRunningTime="2026-01-24 08:12:34.958199407 +0000 UTC m=+1893.678072695" Jan 24 08:12:35 crc kubenswrapper[4705]: I0124 08:12:35.575173 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:12:35 crc kubenswrapper[4705]: E0124 08:12:35.575442 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:12:38 crc kubenswrapper[4705]: I0124 08:12:38.796969 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:38 crc kubenswrapper[4705]: I0124 08:12:38.797498 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:38 crc kubenswrapper[4705]: I0124 08:12:38.848600 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:39 crc kubenswrapper[4705]: I0124 08:12:39.008228 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:39 crc kubenswrapper[4705]: I0124 08:12:39.282276 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6htlg" Jan 24 08:12:39 crc kubenswrapper[4705]: I0124 08:12:39.282348 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6htlg" Jan 24 08:12:40 crc kubenswrapper[4705]: I0124 08:12:40.332043 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6htlg" podUID="3222cd1a-e773-46a3-94bf-cd280f3c2b27" containerName="registry-server" probeResult="failure" output=< Jan 24 08:12:40 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Jan 24 08:12:40 crc kubenswrapper[4705]: > Jan 24 08:12:41 crc kubenswrapper[4705]: I0124 08:12:41.372268 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvpng"] Jan 24 08:12:41 crc kubenswrapper[4705]: I0124 08:12:41.373265 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dvpng" podUID="2bde3532-50d9-4b58-bdbb-912b1fe4d251" containerName="registry-server" containerID="cri-o://1a3dc63f12920b21c119a5ebf9de0695020667e78d744545d5a01fef72fa51ce" gracePeriod=2 Jan 24 08:12:41 crc kubenswrapper[4705]: I0124 08:12:41.992707 4705 generic.go:334] "Generic (PLEG): container finished" podID="2bde3532-50d9-4b58-bdbb-912b1fe4d251" containerID="1a3dc63f12920b21c119a5ebf9de0695020667e78d744545d5a01fef72fa51ce" exitCode=0 Jan 24 08:12:41 crc kubenswrapper[4705]: I0124 08:12:41.992791 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-dvpng" event={"ID":"2bde3532-50d9-4b58-bdbb-912b1fe4d251","Type":"ContainerDied","Data":"1a3dc63f12920b21c119a5ebf9de0695020667e78d744545d5a01fef72fa51ce"} Jan 24 08:12:42 crc kubenswrapper[4705]: I0124 08:12:42.394220 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:42 crc kubenswrapper[4705]: I0124 08:12:42.521277 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bde3532-50d9-4b58-bdbb-912b1fe4d251-catalog-content\") pod \"2bde3532-50d9-4b58-bdbb-912b1fe4d251\" (UID: \"2bde3532-50d9-4b58-bdbb-912b1fe4d251\") " Jan 24 08:12:42 crc kubenswrapper[4705]: I0124 08:12:42.521333 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnhpk\" (UniqueName: \"kubernetes.io/projected/2bde3532-50d9-4b58-bdbb-912b1fe4d251-kube-api-access-rnhpk\") pod \"2bde3532-50d9-4b58-bdbb-912b1fe4d251\" (UID: \"2bde3532-50d9-4b58-bdbb-912b1fe4d251\") " Jan 24 08:12:42 crc kubenswrapper[4705]: I0124 08:12:42.521404 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bde3532-50d9-4b58-bdbb-912b1fe4d251-utilities\") pod \"2bde3532-50d9-4b58-bdbb-912b1fe4d251\" (UID: \"2bde3532-50d9-4b58-bdbb-912b1fe4d251\") " Jan 24 08:12:42 crc kubenswrapper[4705]: I0124 08:12:42.522467 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bde3532-50d9-4b58-bdbb-912b1fe4d251-utilities" (OuterVolumeSpecName: "utilities") pod "2bde3532-50d9-4b58-bdbb-912b1fe4d251" (UID: "2bde3532-50d9-4b58-bdbb-912b1fe4d251"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:12:42 crc kubenswrapper[4705]: I0124 08:12:42.527633 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bde3532-50d9-4b58-bdbb-912b1fe4d251-kube-api-access-rnhpk" (OuterVolumeSpecName: "kube-api-access-rnhpk") pod "2bde3532-50d9-4b58-bdbb-912b1fe4d251" (UID: "2bde3532-50d9-4b58-bdbb-912b1fe4d251"). InnerVolumeSpecName "kube-api-access-rnhpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:12:42 crc kubenswrapper[4705]: I0124 08:12:42.543918 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bde3532-50d9-4b58-bdbb-912b1fe4d251-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2bde3532-50d9-4b58-bdbb-912b1fe4d251" (UID: "2bde3532-50d9-4b58-bdbb-912b1fe4d251"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:12:42 crc kubenswrapper[4705]: I0124 08:12:42.624304 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bde3532-50d9-4b58-bdbb-912b1fe4d251-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:12:42 crc kubenswrapper[4705]: I0124 08:12:42.624344 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bde3532-50d9-4b58-bdbb-912b1fe4d251-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:12:42 crc kubenswrapper[4705]: I0124 08:12:42.624356 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnhpk\" (UniqueName: \"kubernetes.io/projected/2bde3532-50d9-4b58-bdbb-912b1fe4d251-kube-api-access-rnhpk\") on node \"crc\" DevicePath \"\"" Jan 24 08:12:43 crc kubenswrapper[4705]: I0124 08:12:43.005072 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvpng" 
event={"ID":"2bde3532-50d9-4b58-bdbb-912b1fe4d251","Type":"ContainerDied","Data":"0b008532ca59fb26968923bed77ea8a4e76f604d3b8e2836daa8d0b237cc98dd"} Jan 24 08:12:43 crc kubenswrapper[4705]: I0124 08:12:43.005130 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvpng" Jan 24 08:12:43 crc kubenswrapper[4705]: I0124 08:12:43.005132 4705 scope.go:117] "RemoveContainer" containerID="1a3dc63f12920b21c119a5ebf9de0695020667e78d744545d5a01fef72fa51ce" Jan 24 08:12:43 crc kubenswrapper[4705]: I0124 08:12:43.035903 4705 scope.go:117] "RemoveContainer" containerID="a4aa9210be6883317539b7e728764f47e0249c96c4c81b7c9576b6f303b44350" Jan 24 08:12:43 crc kubenswrapper[4705]: I0124 08:12:43.043349 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvpng"] Jan 24 08:12:43 crc kubenswrapper[4705]: I0124 08:12:43.066308 4705 scope.go:117] "RemoveContainer" containerID="b503248b456d5c59fe76ed46e8cb5d7c0bd8e3b85e21221d915cf684578550d5" Jan 24 08:12:43 crc kubenswrapper[4705]: I0124 08:12:43.080662 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvpng"] Jan 24 08:12:43 crc kubenswrapper[4705]: I0124 08:12:43.589129 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bde3532-50d9-4b58-bdbb-912b1fe4d251" path="/var/lib/kubelet/pods/2bde3532-50d9-4b58-bdbb-912b1fe4d251/volumes" Jan 24 08:12:46 crc kubenswrapper[4705]: I0124 08:12:46.575733 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:12:46 crc kubenswrapper[4705]: E0124 08:12:46.576387 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:12:49 crc kubenswrapper[4705]: I0124 08:12:49.327728 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6htlg" Jan 24 08:12:49 crc kubenswrapper[4705]: I0124 08:12:49.385650 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6htlg" Jan 24 08:12:49 crc kubenswrapper[4705]: I0124 08:12:49.562488 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6htlg"] Jan 24 08:12:51 crc kubenswrapper[4705]: I0124 08:12:51.068261 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6htlg" podUID="3222cd1a-e773-46a3-94bf-cd280f3c2b27" containerName="registry-server" containerID="cri-o://9700fb886c5c15c39d3197266340b3390c1dade291dcfb2b1c25e14fe9a33b90" gracePeriod=2 Jan 24 08:12:51 crc kubenswrapper[4705]: I0124 08:12:51.518160 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6htlg"
Jan 24 08:12:51 crc kubenswrapper[4705]: I0124 08:12:51.596837 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3222cd1a-e773-46a3-94bf-cd280f3c2b27-utilities\") pod \"3222cd1a-e773-46a3-94bf-cd280f3c2b27\" (UID: \"3222cd1a-e773-46a3-94bf-cd280f3c2b27\") "
Jan 24 08:12:51 crc kubenswrapper[4705]: I0124 08:12:51.596897 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3222cd1a-e773-46a3-94bf-cd280f3c2b27-catalog-content\") pod \"3222cd1a-e773-46a3-94bf-cd280f3c2b27\" (UID: \"3222cd1a-e773-46a3-94bf-cd280f3c2b27\") "
Jan 24 08:12:51 crc kubenswrapper[4705]: I0124 08:12:51.597010 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx8nc\" (UniqueName: \"kubernetes.io/projected/3222cd1a-e773-46a3-94bf-cd280f3c2b27-kube-api-access-nx8nc\") pod \"3222cd1a-e773-46a3-94bf-cd280f3c2b27\" (UID: \"3222cd1a-e773-46a3-94bf-cd280f3c2b27\") "
Jan 24 08:12:51 crc kubenswrapper[4705]: I0124 08:12:51.597694 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3222cd1a-e773-46a3-94bf-cd280f3c2b27-utilities" (OuterVolumeSpecName: "utilities") pod "3222cd1a-e773-46a3-94bf-cd280f3c2b27" (UID: "3222cd1a-e773-46a3-94bf-cd280f3c2b27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 08:12:51 crc kubenswrapper[4705]: I0124 08:12:51.602289 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3222cd1a-e773-46a3-94bf-cd280f3c2b27-kube-api-access-nx8nc" (OuterVolumeSpecName: "kube-api-access-nx8nc") pod "3222cd1a-e773-46a3-94bf-cd280f3c2b27" (UID: "3222cd1a-e773-46a3-94bf-cd280f3c2b27"). InnerVolumeSpecName "kube-api-access-nx8nc".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 08:12:51 crc kubenswrapper[4705]: I0124 08:12:51.698786 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx8nc\" (UniqueName: \"kubernetes.io/projected/3222cd1a-e773-46a3-94bf-cd280f3c2b27-kube-api-access-nx8nc\") on node \"crc\" DevicePath \"\""
Jan 24 08:12:51 crc kubenswrapper[4705]: I0124 08:12:51.698836 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3222cd1a-e773-46a3-94bf-cd280f3c2b27-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 08:12:51 crc kubenswrapper[4705]: I0124 08:12:51.721276 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3222cd1a-e773-46a3-94bf-cd280f3c2b27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3222cd1a-e773-46a3-94bf-cd280f3c2b27" (UID: "3222cd1a-e773-46a3-94bf-cd280f3c2b27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 08:12:51 crc kubenswrapper[4705]: I0124 08:12:51.801281 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3222cd1a-e773-46a3-94bf-cd280f3c2b27-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 08:12:52 crc kubenswrapper[4705]: I0124 08:12:52.079200 4705 generic.go:334] "Generic (PLEG): container finished" podID="3222cd1a-e773-46a3-94bf-cd280f3c2b27" containerID="9700fb886c5c15c39d3197266340b3390c1dade291dcfb2b1c25e14fe9a33b90" exitCode=0
Jan 24 08:12:52 crc kubenswrapper[4705]: I0124 08:12:52.079248 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6htlg" event={"ID":"3222cd1a-e773-46a3-94bf-cd280f3c2b27","Type":"ContainerDied","Data":"9700fb886c5c15c39d3197266340b3390c1dade291dcfb2b1c25e14fe9a33b90"}
Jan 24 08:12:52 crc kubenswrapper[4705]: I0124 08:12:52.079279 4705 kubelet.go:2453] "SyncLoop (PLEG): event for
pod" pod="openshift-marketplace/redhat-operators-6htlg" event={"ID":"3222cd1a-e773-46a3-94bf-cd280f3c2b27","Type":"ContainerDied","Data":"e7b37576d0f95b714e79c81d70ae19feca34629fab0e53f5612684402ab9371f"}
Jan 24 08:12:52 crc kubenswrapper[4705]: I0124 08:12:52.079276 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6htlg"
Jan 24 08:12:52 crc kubenswrapper[4705]: I0124 08:12:52.079295 4705 scope.go:117] "RemoveContainer" containerID="9700fb886c5c15c39d3197266340b3390c1dade291dcfb2b1c25e14fe9a33b90"
Jan 24 08:12:52 crc kubenswrapper[4705]: I0124 08:12:52.104629 4705 scope.go:117] "RemoveContainer" containerID="efa2f22d23dacaf20c32d72d75bb8b396820ff82204b623af9d10ed783453deb"
Jan 24 08:12:52 crc kubenswrapper[4705]: I0124 08:12:52.123660 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6htlg"]
Jan 24 08:12:52 crc kubenswrapper[4705]: I0124 08:12:52.131512 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6htlg"]
Jan 24 08:12:52 crc kubenswrapper[4705]: I0124 08:12:52.145132 4705 scope.go:117] "RemoveContainer" containerID="97d1381832a3e1ccb0e116536794e03657ca8fdc2bc301c53c649c019d89f077"
Jan 24 08:12:52 crc kubenswrapper[4705]: I0124 08:12:52.171769 4705 scope.go:117] "RemoveContainer" containerID="9700fb886c5c15c39d3197266340b3390c1dade291dcfb2b1c25e14fe9a33b90"
Jan 24 08:12:52 crc kubenswrapper[4705]: E0124 08:12:52.172329 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9700fb886c5c15c39d3197266340b3390c1dade291dcfb2b1c25e14fe9a33b90\": container with ID starting with 9700fb886c5c15c39d3197266340b3390c1dade291dcfb2b1c25e14fe9a33b90 not found: ID does not exist" containerID="9700fb886c5c15c39d3197266340b3390c1dade291dcfb2b1c25e14fe9a33b90"
Jan 24 08:12:52 crc kubenswrapper[4705]: I0124 08:12:52.172385 4705
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9700fb886c5c15c39d3197266340b3390c1dade291dcfb2b1c25e14fe9a33b90"} err="failed to get container status \"9700fb886c5c15c39d3197266340b3390c1dade291dcfb2b1c25e14fe9a33b90\": rpc error: code = NotFound desc = could not find container \"9700fb886c5c15c39d3197266340b3390c1dade291dcfb2b1c25e14fe9a33b90\": container with ID starting with 9700fb886c5c15c39d3197266340b3390c1dade291dcfb2b1c25e14fe9a33b90 not found: ID does not exist"
Jan 24 08:12:52 crc kubenswrapper[4705]: I0124 08:12:52.172414 4705 scope.go:117] "RemoveContainer" containerID="efa2f22d23dacaf20c32d72d75bb8b396820ff82204b623af9d10ed783453deb"
Jan 24 08:12:52 crc kubenswrapper[4705]: E0124 08:12:52.172737 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efa2f22d23dacaf20c32d72d75bb8b396820ff82204b623af9d10ed783453deb\": container with ID starting with efa2f22d23dacaf20c32d72d75bb8b396820ff82204b623af9d10ed783453deb not found: ID does not exist" containerID="efa2f22d23dacaf20c32d72d75bb8b396820ff82204b623af9d10ed783453deb"
Jan 24 08:12:52 crc kubenswrapper[4705]: I0124 08:12:52.172769 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa2f22d23dacaf20c32d72d75bb8b396820ff82204b623af9d10ed783453deb"} err="failed to get container status \"efa2f22d23dacaf20c32d72d75bb8b396820ff82204b623af9d10ed783453deb\": rpc error: code = NotFound desc = could not find container \"efa2f22d23dacaf20c32d72d75bb8b396820ff82204b623af9d10ed783453deb\": container with ID starting with efa2f22d23dacaf20c32d72d75bb8b396820ff82204b623af9d10ed783453deb not found: ID does not exist"
Jan 24 08:12:52 crc kubenswrapper[4705]: I0124 08:12:52.172792 4705 scope.go:117] "RemoveContainer" containerID="97d1381832a3e1ccb0e116536794e03657ca8fdc2bc301c53c649c019d89f077"
Jan 24 08:12:52 crc kubenswrapper[4705]: E0124
08:12:52.172997 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97d1381832a3e1ccb0e116536794e03657ca8fdc2bc301c53c649c019d89f077\": container with ID starting with 97d1381832a3e1ccb0e116536794e03657ca8fdc2bc301c53c649c019d89f077 not found: ID does not exist" containerID="97d1381832a3e1ccb0e116536794e03657ca8fdc2bc301c53c649c019d89f077"
Jan 24 08:12:52 crc kubenswrapper[4705]: I0124 08:12:52.173020 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97d1381832a3e1ccb0e116536794e03657ca8fdc2bc301c53c649c019d89f077"} err="failed to get container status \"97d1381832a3e1ccb0e116536794e03657ca8fdc2bc301c53c649c019d89f077\": rpc error: code = NotFound desc = could not find container \"97d1381832a3e1ccb0e116536794e03657ca8fdc2bc301c53c649c019d89f077\": container with ID starting with 97d1381832a3e1ccb0e116536794e03657ca8fdc2bc301c53c649c019d89f077 not found: ID does not exist"
Jan 24 08:12:53 crc kubenswrapper[4705]: I0124 08:12:53.589098 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3222cd1a-e773-46a3-94bf-cd280f3c2b27" path="/var/lib/kubelet/pods/3222cd1a-e773-46a3-94bf-cd280f3c2b27/volumes"
Jan 24 08:12:58 crc kubenswrapper[4705]: I0124 08:12:58.575389 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7"
Jan 24 08:12:58 crc kubenswrapper[4705]: E0124 08:12:58.577434 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762"
Jan 24 08:13:05 crc kubenswrapper[4705]: I0124 08:13:05.995836
4705 scope.go:117] "RemoveContainer" containerID="e80d6cd79edee3a05bdaf723c476e8d24e9a85f1bd291743e5c0e0e55c7221e3"
Jan 24 08:13:06 crc kubenswrapper[4705]: I0124 08:13:06.051507 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-bh2tb"]
Jan 24 08:13:06 crc kubenswrapper[4705]: I0124 08:13:06.060370 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-bh2tb"]
Jan 24 08:13:07 crc kubenswrapper[4705]: I0124 08:13:07.033134 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-pwc24"]
Jan 24 08:13:07 crc kubenswrapper[4705]: I0124 08:13:07.052600 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-pwc24"]
Jan 24 08:13:07 crc kubenswrapper[4705]: I0124 08:13:07.064751 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-tnl2q"]
Jan 24 08:13:07 crc kubenswrapper[4705]: I0124 08:13:07.073678 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-d619-account-create-update-t2827"]
Jan 24 08:13:07 crc kubenswrapper[4705]: I0124 08:13:07.081775 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-tnl2q"]
Jan 24 08:13:07 crc kubenswrapper[4705]: I0124 08:13:07.089776 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-5829-account-create-update-wq6kb"]
Jan 24 08:13:07 crc kubenswrapper[4705]: I0124 08:13:07.097177 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8111-account-create-update-29vdk"]
Jan 24 08:13:07 crc kubenswrapper[4705]: I0124 08:13:07.106075 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-d619-account-create-update-t2827"]
Jan 24 08:13:07 crc kubenswrapper[4705]: I0124 08:13:07.114546 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-5829-account-create-update-wq6kb"]
Jan 24 08:13:07 crc
kubenswrapper[4705]: I0124 08:13:07.123309 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8111-account-create-update-29vdk"]
Jan 24 08:13:07 crc kubenswrapper[4705]: I0124 08:13:07.587450 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3599801a-a9fe-4efe-9f2d-5af2507a76ed" path="/var/lib/kubelet/pods/3599801a-a9fe-4efe-9f2d-5af2507a76ed/volumes"
Jan 24 08:13:07 crc kubenswrapper[4705]: I0124 08:13:07.588457 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668dd17c-3a2a-48b7-967a-a06a2b9a1192" path="/var/lib/kubelet/pods/668dd17c-3a2a-48b7-967a-a06a2b9a1192/volumes"
Jan 24 08:13:07 crc kubenswrapper[4705]: I0124 08:13:07.589272 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c02cc76-2478-43fb-a558-70fa32999210" path="/var/lib/kubelet/pods/9c02cc76-2478-43fb-a558-70fa32999210/volumes"
Jan 24 08:13:07 crc kubenswrapper[4705]: I0124 08:13:07.590172 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b41ea1a4-fd22-4665-83e1-6e06bae9aeb5" path="/var/lib/kubelet/pods/b41ea1a4-fd22-4665-83e1-6e06bae9aeb5/volumes"
Jan 24 08:13:07 crc kubenswrapper[4705]: I0124 08:13:07.592089 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbb2778d-90da-49fd-ba99-d1d32efd55c4" path="/var/lib/kubelet/pods/cbb2778d-90da-49fd-ba99-d1d32efd55c4/volumes"
Jan 24 08:13:07 crc kubenswrapper[4705]: I0124 08:13:07.593009 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe428932-5a60-4f4d-baf2-a970bd94e6e4" path="/var/lib/kubelet/pods/fe428932-5a60-4f4d-baf2-a970bd94e6e4/volumes"
Jan 24 08:13:11 crc kubenswrapper[4705]: I0124 08:13:11.583268 4705 scope.go:117] "RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7"
Jan 24 08:13:12 crc kubenswrapper[4705]: I0124 08:13:12.262566 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"e75dab5ce4ca72aff4b9b317d9abfccd45194eb72cb1bff0efc28fc55dba4e9f"}
Jan 24 08:13:28 crc kubenswrapper[4705]: I0124 08:13:28.445380 4705 generic.go:334] "Generic (PLEG): container finished" podID="f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516" containerID="1428167278c95ae8cc948eae2ce50b156a658ad74fa743ce78929e893212c9ca" exitCode=0
Jan 24 08:13:28 crc kubenswrapper[4705]: I0124 08:13:28.445928 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj" event={"ID":"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516","Type":"ContainerDied","Data":"1428167278c95ae8cc948eae2ce50b156a658ad74fa743ce78929e893212c9ca"}
Jan 24 08:13:29 crc kubenswrapper[4705]: I0124 08:13:29.976092 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.166204 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-ssh-key-openstack-edpm-ipam\") pod \"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516\" (UID: \"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516\") "
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.166540 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-inventory\") pod \"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516\" (UID: \"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516\") "
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.166579 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54gnz\" (UniqueName:
\"kubernetes.io/projected/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-kube-api-access-54gnz\") pod \"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516\" (UID: \"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516\") "
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.180035 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-kube-api-access-54gnz" (OuterVolumeSpecName: "kube-api-access-54gnz") pod "f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516" (UID: "f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516"). InnerVolumeSpecName "kube-api-access-54gnz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.193670 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-inventory" (OuterVolumeSpecName: "inventory") pod "f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516" (UID: "f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.202265 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516" (UID: "f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.269063 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.269401 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-inventory\") on node \"crc\" DevicePath \"\""
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.269417 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54gnz\" (UniqueName: \"kubernetes.io/projected/f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516-kube-api-access-54gnz\") on node \"crc\" DevicePath \"\""
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.464850 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj" event={"ID":"f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516","Type":"ContainerDied","Data":"d7c0461c154d35703d43b974a04d5f0148e90c0d3f39a6a11a0e44b1ee95ac3c"}
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.464889 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c0461c154d35703d43b974a04d5f0148e90c0d3f39a6a11a0e44b1ee95ac3c"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.464900 4705 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.563354 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps"]
Jan 24 08:13:30 crc kubenswrapper[4705]: E0124 08:13:30.563868 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.563891 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 24 08:13:30 crc kubenswrapper[4705]: E0124 08:13:30.563923 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bde3532-50d9-4b58-bdbb-912b1fe4d251" containerName="extract-content"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.563932 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bde3532-50d9-4b58-bdbb-912b1fe4d251" containerName="extract-content"
Jan 24 08:13:30 crc kubenswrapper[4705]: E0124 08:13:30.563949 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3222cd1a-e773-46a3-94bf-cd280f3c2b27" containerName="registry-server"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.563957 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3222cd1a-e773-46a3-94bf-cd280f3c2b27" containerName="registry-server"
Jan 24 08:13:30 crc kubenswrapper[4705]: E0124 08:13:30.563972 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bde3532-50d9-4b58-bdbb-912b1fe4d251" containerName="registry-server"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.563980 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bde3532-50d9-4b58-bdbb-912b1fe4d251" containerName="registry-server"
Jan 24 08:13:30 crc kubenswrapper[4705]:
E0124 08:13:30.564000 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3222cd1a-e773-46a3-94bf-cd280f3c2b27" containerName="extract-utilities"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.564010 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3222cd1a-e773-46a3-94bf-cd280f3c2b27" containerName="extract-utilities"
Jan 24 08:13:30 crc kubenswrapper[4705]: E0124 08:13:30.564030 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bde3532-50d9-4b58-bdbb-912b1fe4d251" containerName="extract-utilities"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.564038 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bde3532-50d9-4b58-bdbb-912b1fe4d251" containerName="extract-utilities"
Jan 24 08:13:30 crc kubenswrapper[4705]: E0124 08:13:30.564053 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3222cd1a-e773-46a3-94bf-cd280f3c2b27" containerName="extract-content"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.564060 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3222cd1a-e773-46a3-94bf-cd280f3c2b27" containerName="extract-content"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.564302 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bde3532-50d9-4b58-bdbb-912b1fe4d251" containerName="registry-server"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.564325 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.564339 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3222cd1a-e773-46a3-94bf-cd280f3c2b27" containerName="registry-server"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.565404 4705 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.569156 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.569435 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.569704 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.569881 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.586803 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps"]
Jan 24 08:13:30 crc kubenswrapper[4705]: E0124 08:13:30.606471 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d0eb2c_3a7c_4b3d_9cb7_b2c46beec516.slice/crio-d7c0461c154d35703d43b974a04d5f0148e90c0d3f39a6a11a0e44b1ee95ac3c\": RecentStats: unable to find data in memory cache]"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.676230 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs5x9\" (UniqueName: \"kubernetes.io/projected/e6982665-cdec-4e7c-b9d1-0c7532cf8830-kube-api-access-rs5x9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z4xps\" (UID: \"e6982665-cdec-4e7c-b9d1-0c7532cf8830\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.676969 4705 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6982665-cdec-4e7c-b9d1-0c7532cf8830-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z4xps\" (UID: \"e6982665-cdec-4e7c-b9d1-0c7532cf8830\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.677021 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6982665-cdec-4e7c-b9d1-0c7532cf8830-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z4xps\" (UID: \"e6982665-cdec-4e7c-b9d1-0c7532cf8830\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.779541 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs5x9\" (UniqueName: \"kubernetes.io/projected/e6982665-cdec-4e7c-b9d1-0c7532cf8830-kube-api-access-rs5x9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z4xps\" (UID: \"e6982665-cdec-4e7c-b9d1-0c7532cf8830\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.779619 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6982665-cdec-4e7c-b9d1-0c7532cf8830-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z4xps\" (UID: \"e6982665-cdec-4e7c-b9d1-0c7532cf8830\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.779657 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName:
\"kubernetes.io/secret/e6982665-cdec-4e7c-b9d1-0c7532cf8830-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z4xps\" (UID: \"e6982665-cdec-4e7c-b9d1-0c7532cf8830\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.784291 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6982665-cdec-4e7c-b9d1-0c7532cf8830-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z4xps\" (UID: \"e6982665-cdec-4e7c-b9d1-0c7532cf8830\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.786263 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6982665-cdec-4e7c-b9d1-0c7532cf8830-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z4xps\" (UID: \"e6982665-cdec-4e7c-b9d1-0c7532cf8830\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.799733 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs5x9\" (UniqueName: \"kubernetes.io/projected/e6982665-cdec-4e7c-b9d1-0c7532cf8830-kube-api-access-rs5x9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z4xps\" (UID: \"e6982665-cdec-4e7c-b9d1-0c7532cf8830\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps"
Jan 24 08:13:30 crc kubenswrapper[4705]: I0124 08:13:30.890491 4705 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps"
Jan 24 08:13:31 crc kubenswrapper[4705]: I0124 08:13:31.538254 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps"]
Jan 24 08:13:31 crc kubenswrapper[4705]: W0124 08:13:31.540180 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6982665_cdec_4e7c_b9d1_0c7532cf8830.slice/crio-463948f374e01eab544648bcdd25ea2ee1c6a96b5e81de8c5b98db5335bb8eac WatchSource:0}: Error finding container 463948f374e01eab544648bcdd25ea2ee1c6a96b5e81de8c5b98db5335bb8eac: Status 404 returned error can't find the container with id 463948f374e01eab544648bcdd25ea2ee1c6a96b5e81de8c5b98db5335bb8eac
Jan 24 08:13:32 crc kubenswrapper[4705]: I0124 08:13:32.488499 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps" event={"ID":"e6982665-cdec-4e7c-b9d1-0c7532cf8830","Type":"ContainerStarted","Data":"8138306ce168255169564993d5089c8914020def90c58cc76fb5abc69963ffcf"}
Jan 24 08:13:32 crc kubenswrapper[4705]: I0124 08:13:32.488813 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps" event={"ID":"e6982665-cdec-4e7c-b9d1-0c7532cf8830","Type":"ContainerStarted","Data":"463948f374e01eab544648bcdd25ea2ee1c6a96b5e81de8c5b98db5335bb8eac"}
Jan 24 08:13:32 crc kubenswrapper[4705]: I0124 08:13:32.513073 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps" podStartSLOduration=1.822677687 podStartE2EDuration="2.513050193s" podCreationTimestamp="2026-01-24 08:13:30 +0000 UTC" firstStartedPulling="2026-01-24 08:13:31.545107205 +0000 UTC m=+1950.264980493" lastFinishedPulling="2026-01-24 08:13:32.235479711 +0000 UTC
m=+1950.955352999" observedRunningTime="2026-01-24 08:13:32.503637276 +0000 UTC m=+1951.223510584" watchObservedRunningTime="2026-01-24 08:13:32.513050193 +0000 UTC m=+1951.232923481"
Jan 24 08:13:38 crc kubenswrapper[4705]: I0124 08:13:38.542623 4705 generic.go:334] "Generic (PLEG): container finished" podID="e6982665-cdec-4e7c-b9d1-0c7532cf8830" containerID="8138306ce168255169564993d5089c8914020def90c58cc76fb5abc69963ffcf" exitCode=0
Jan 24 08:13:38 crc kubenswrapper[4705]: I0124 08:13:38.542727 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps" event={"ID":"e6982665-cdec-4e7c-b9d1-0c7532cf8830","Type":"ContainerDied","Data":"8138306ce168255169564993d5089c8914020def90c58cc76fb5abc69963ffcf"}
Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.088185 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps"
Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.096623 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs5x9\" (UniqueName: \"kubernetes.io/projected/e6982665-cdec-4e7c-b9d1-0c7532cf8830-kube-api-access-rs5x9\") pod \"e6982665-cdec-4e7c-b9d1-0c7532cf8830\" (UID: \"e6982665-cdec-4e7c-b9d1-0c7532cf8830\") "
Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.096721 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6982665-cdec-4e7c-b9d1-0c7532cf8830-ssh-key-openstack-edpm-ipam\") pod \"e6982665-cdec-4e7c-b9d1-0c7532cf8830\" (UID: \"e6982665-cdec-4e7c-b9d1-0c7532cf8830\") "
Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.096751 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6982665-cdec-4e7c-b9d1-0c7532cf8830-inventory\") pod
\"e6982665-cdec-4e7c-b9d1-0c7532cf8830\" (UID: \"e6982665-cdec-4e7c-b9d1-0c7532cf8830\") " Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.110673 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6982665-cdec-4e7c-b9d1-0c7532cf8830-kube-api-access-rs5x9" (OuterVolumeSpecName: "kube-api-access-rs5x9") pod "e6982665-cdec-4e7c-b9d1-0c7532cf8830" (UID: "e6982665-cdec-4e7c-b9d1-0c7532cf8830"). InnerVolumeSpecName "kube-api-access-rs5x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.138470 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6982665-cdec-4e7c-b9d1-0c7532cf8830-inventory" (OuterVolumeSpecName: "inventory") pod "e6982665-cdec-4e7c-b9d1-0c7532cf8830" (UID: "e6982665-cdec-4e7c-b9d1-0c7532cf8830"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.143454 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6982665-cdec-4e7c-b9d1-0c7532cf8830-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e6982665-cdec-4e7c-b9d1-0c7532cf8830" (UID: "e6982665-cdec-4e7c-b9d1-0c7532cf8830"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.198626 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs5x9\" (UniqueName: \"kubernetes.io/projected/e6982665-cdec-4e7c-b9d1-0c7532cf8830-kube-api-access-rs5x9\") on node \"crc\" DevicePath \"\"" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.198665 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6982665-cdec-4e7c-b9d1-0c7532cf8830-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.198676 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6982665-cdec-4e7c-b9d1-0c7532cf8830-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.561427 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps" event={"ID":"e6982665-cdec-4e7c-b9d1-0c7532cf8830","Type":"ContainerDied","Data":"463948f374e01eab544648bcdd25ea2ee1c6a96b5e81de8c5b98db5335bb8eac"} Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.561467 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z4xps" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.561479 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="463948f374e01eab544648bcdd25ea2ee1c6a96b5e81de8c5b98db5335bb8eac" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.650547 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw"] Jan 24 08:13:40 crc kubenswrapper[4705]: E0124 08:13:40.651102 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6982665-cdec-4e7c-b9d1-0c7532cf8830" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.651129 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6982665-cdec-4e7c-b9d1-0c7532cf8830" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.651377 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6982665-cdec-4e7c-b9d1-0c7532cf8830" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.656247 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.659545 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.660432 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.660574 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.666452 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.670776 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw"] Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.809296 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/751e9e62-9148-48d5-9630-158d42b6b78d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5thzw\" (UID: \"751e9e62-9148-48d5-9630-158d42b6b78d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.809581 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/751e9e62-9148-48d5-9630-158d42b6b78d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5thzw\" (UID: \"751e9e62-9148-48d5-9630-158d42b6b78d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.809647 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gms4c\" (UniqueName: \"kubernetes.io/projected/751e9e62-9148-48d5-9630-158d42b6b78d-kube-api-access-gms4c\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5thzw\" (UID: \"751e9e62-9148-48d5-9630-158d42b6b78d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.911337 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/751e9e62-9148-48d5-9630-158d42b6b78d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5thzw\" (UID: \"751e9e62-9148-48d5-9630-158d42b6b78d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.911795 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gms4c\" (UniqueName: \"kubernetes.io/projected/751e9e62-9148-48d5-9630-158d42b6b78d-kube-api-access-gms4c\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5thzw\" (UID: \"751e9e62-9148-48d5-9630-158d42b6b78d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.912411 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/751e9e62-9148-48d5-9630-158d42b6b78d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5thzw\" (UID: \"751e9e62-9148-48d5-9630-158d42b6b78d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.918499 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/751e9e62-9148-48d5-9630-158d42b6b78d-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-5thzw\" (UID: \"751e9e62-9148-48d5-9630-158d42b6b78d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.918584 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/751e9e62-9148-48d5-9630-158d42b6b78d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5thzw\" (UID: \"751e9e62-9148-48d5-9630-158d42b6b78d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.931426 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gms4c\" (UniqueName: \"kubernetes.io/projected/751e9e62-9148-48d5-9630-158d42b6b78d-kube-api-access-gms4c\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5thzw\" (UID: \"751e9e62-9148-48d5-9630-158d42b6b78d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" Jan 24 08:13:40 crc kubenswrapper[4705]: I0124 08:13:40.976522 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" Jan 24 08:13:41 crc kubenswrapper[4705]: I0124 08:13:41.520308 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw"] Jan 24 08:13:41 crc kubenswrapper[4705]: I0124 08:13:41.571991 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" event={"ID":"751e9e62-9148-48d5-9630-158d42b6b78d","Type":"ContainerStarted","Data":"52878e814517d03d4ca24e9cb0da39c890bb0b9bcf3d15c2363c83595334365e"} Jan 24 08:13:42 crc kubenswrapper[4705]: I0124 08:13:42.582021 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" event={"ID":"751e9e62-9148-48d5-9630-158d42b6b78d","Type":"ContainerStarted","Data":"35fcfe73244fa07ac577d5d0ecdbbe80d5e111708226be3853a9811fdb70936c"} Jan 24 08:13:42 crc kubenswrapper[4705]: I0124 08:13:42.612228 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" podStartSLOduration=2.145293089 podStartE2EDuration="2.612204105s" podCreationTimestamp="2026-01-24 08:13:40 +0000 UTC" firstStartedPulling="2026-01-24 08:13:41.527102459 +0000 UTC m=+1960.246975747" lastFinishedPulling="2026-01-24 08:13:41.994013475 +0000 UTC m=+1960.713886763" observedRunningTime="2026-01-24 08:13:42.609017585 +0000 UTC m=+1961.328890873" watchObservedRunningTime="2026-01-24 08:13:42.612204105 +0000 UTC m=+1961.332077393" Jan 24 08:13:55 crc kubenswrapper[4705]: I0124 08:13:55.044089 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8qzw5"] Jan 24 08:13:55 crc kubenswrapper[4705]: I0124 08:13:55.052713 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8qzw5"] Jan 24 08:13:55 crc kubenswrapper[4705]: I0124 
08:13:55.589101 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cedd21a3-3cd7-415b-b600-7510e930b2a5" path="/var/lib/kubelet/pods/cedd21a3-3cd7-415b-b600-7510e930b2a5/volumes" Jan 24 08:14:06 crc kubenswrapper[4705]: I0124 08:14:06.107003 4705 scope.go:117] "RemoveContainer" containerID="e7711991b75acc17d904e4925d8e0fa40021318256631e5cfec4786d9a29b4d9" Jan 24 08:14:06 crc kubenswrapper[4705]: I0124 08:14:06.137221 4705 scope.go:117] "RemoveContainer" containerID="215a6a02d3bb2622babeeb6ba7329154d689243e931ea91550e012fa5202613e" Jan 24 08:14:06 crc kubenswrapper[4705]: I0124 08:14:06.217645 4705 scope.go:117] "RemoveContainer" containerID="c9db6cb5d44ff1811594d4fc63402bd6f262227537974c0126beb810e1471bcc" Jan 24 08:14:06 crc kubenswrapper[4705]: I0124 08:14:06.272021 4705 scope.go:117] "RemoveContainer" containerID="f3f91d9cdb51d1fa3152c720dccb74e5776c9e85c36ca527d6a5f627c07567e9" Jan 24 08:14:06 crc kubenswrapper[4705]: I0124 08:14:06.311720 4705 scope.go:117] "RemoveContainer" containerID="dc79bb1062b394b73060f2c208f26d8ae56448e927153af0ff7ce67e1bf9f7f1" Jan 24 08:14:06 crc kubenswrapper[4705]: I0124 08:14:06.352530 4705 scope.go:117] "RemoveContainer" containerID="775a4bc540b450bf870ac68f95f1e3738c96e27bafb77bdaf59cd11634831fbf" Jan 24 08:14:06 crc kubenswrapper[4705]: I0124 08:14:06.393422 4705 scope.go:117] "RemoveContainer" containerID="dcb7daf75c54f75e30e62afd505ec77cc8c079521a44eae4e9d57eca99552364" Jan 24 08:14:17 crc kubenswrapper[4705]: I0124 08:14:17.032530 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4f7gt"] Jan 24 08:14:17 crc kubenswrapper[4705]: I0124 08:14:17.042547 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4f7gt"] Jan 24 08:14:17 crc kubenswrapper[4705]: I0124 08:14:17.586637 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b478974-9991-4090-89ea-48d3953d822d" 
path="/var/lib/kubelet/pods/1b478974-9991-4090-89ea-48d3953d822d/volumes" Jan 24 08:14:19 crc kubenswrapper[4705]: I0124 08:14:19.027302 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-c96dk"] Jan 24 08:14:19 crc kubenswrapper[4705]: I0124 08:14:19.034963 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-c96dk"] Jan 24 08:14:19 crc kubenswrapper[4705]: I0124 08:14:19.590492 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1666f128-9c4f-4a59-8210-bf5783b38f5f" path="/var/lib/kubelet/pods/1666f128-9c4f-4a59-8210-bf5783b38f5f/volumes" Jan 24 08:14:24 crc kubenswrapper[4705]: I0124 08:14:24.001842 4705 generic.go:334] "Generic (PLEG): container finished" podID="751e9e62-9148-48d5-9630-158d42b6b78d" containerID="35fcfe73244fa07ac577d5d0ecdbbe80d5e111708226be3853a9811fdb70936c" exitCode=0 Jan 24 08:14:24 crc kubenswrapper[4705]: I0124 08:14:24.001933 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" event={"ID":"751e9e62-9148-48d5-9630-158d42b6b78d","Type":"ContainerDied","Data":"35fcfe73244fa07ac577d5d0ecdbbe80d5e111708226be3853a9811fdb70936c"} Jan 24 08:14:25 crc kubenswrapper[4705]: I0124 08:14:25.478583 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" Jan 24 08:14:25 crc kubenswrapper[4705]: I0124 08:14:25.569198 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/751e9e62-9148-48d5-9630-158d42b6b78d-inventory\") pod \"751e9e62-9148-48d5-9630-158d42b6b78d\" (UID: \"751e9e62-9148-48d5-9630-158d42b6b78d\") " Jan 24 08:14:25 crc kubenswrapper[4705]: I0124 08:14:25.569253 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/751e9e62-9148-48d5-9630-158d42b6b78d-ssh-key-openstack-edpm-ipam\") pod \"751e9e62-9148-48d5-9630-158d42b6b78d\" (UID: \"751e9e62-9148-48d5-9630-158d42b6b78d\") " Jan 24 08:14:25 crc kubenswrapper[4705]: I0124 08:14:25.569326 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gms4c\" (UniqueName: \"kubernetes.io/projected/751e9e62-9148-48d5-9630-158d42b6b78d-kube-api-access-gms4c\") pod \"751e9e62-9148-48d5-9630-158d42b6b78d\" (UID: \"751e9e62-9148-48d5-9630-158d42b6b78d\") " Jan 24 08:14:25 crc kubenswrapper[4705]: I0124 08:14:25.583882 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/751e9e62-9148-48d5-9630-158d42b6b78d-kube-api-access-gms4c" (OuterVolumeSpecName: "kube-api-access-gms4c") pod "751e9e62-9148-48d5-9630-158d42b6b78d" (UID: "751e9e62-9148-48d5-9630-158d42b6b78d"). InnerVolumeSpecName "kube-api-access-gms4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:14:25 crc kubenswrapper[4705]: I0124 08:14:25.598482 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751e9e62-9148-48d5-9630-158d42b6b78d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "751e9e62-9148-48d5-9630-158d42b6b78d" (UID: "751e9e62-9148-48d5-9630-158d42b6b78d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:14:25 crc kubenswrapper[4705]: I0124 08:14:25.602264 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751e9e62-9148-48d5-9630-158d42b6b78d-inventory" (OuterVolumeSpecName: "inventory") pod "751e9e62-9148-48d5-9630-158d42b6b78d" (UID: "751e9e62-9148-48d5-9630-158d42b6b78d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:14:25 crc kubenswrapper[4705]: I0124 08:14:25.671482 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/751e9e62-9148-48d5-9630-158d42b6b78d-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 08:14:25 crc kubenswrapper[4705]: I0124 08:14:25.671565 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/751e9e62-9148-48d5-9630-158d42b6b78d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 08:14:25 crc kubenswrapper[4705]: I0124 08:14:25.671629 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gms4c\" (UniqueName: \"kubernetes.io/projected/751e9e62-9148-48d5-9630-158d42b6b78d-kube-api-access-gms4c\") on node \"crc\" DevicePath \"\"" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.022149 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" 
event={"ID":"751e9e62-9148-48d5-9630-158d42b6b78d","Type":"ContainerDied","Data":"52878e814517d03d4ca24e9cb0da39c890bb0b9bcf3d15c2363c83595334365e"} Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.022199 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52878e814517d03d4ca24e9cb0da39c890bb0b9bcf3d15c2363c83595334365e" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.022262 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5thzw" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.118079 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx"] Jan 24 08:14:26 crc kubenswrapper[4705]: E0124 08:14:26.118783 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="751e9e62-9148-48d5-9630-158d42b6b78d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.118813 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="751e9e62-9148-48d5-9630-158d42b6b78d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.119126 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="751e9e62-9148-48d5-9630-158d42b6b78d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.120254 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.122706 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.123726 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.123780 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.123871 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.127756 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx"] Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.285930 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3f04082-08d1-49ca-91fd-b538d81a8923-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx\" (UID: \"b3f04082-08d1-49ca-91fd-b538d81a8923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.286273 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bpkl\" (UniqueName: \"kubernetes.io/projected/b3f04082-08d1-49ca-91fd-b538d81a8923-kube-api-access-8bpkl\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx\" (UID: \"b3f04082-08d1-49ca-91fd-b538d81a8923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" Jan 24 08:14:26 crc 
kubenswrapper[4705]: I0124 08:14:26.286416 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3f04082-08d1-49ca-91fd-b538d81a8923-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx\" (UID: \"b3f04082-08d1-49ca-91fd-b538d81a8923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.387915 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3f04082-08d1-49ca-91fd-b538d81a8923-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx\" (UID: \"b3f04082-08d1-49ca-91fd-b538d81a8923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.388001 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bpkl\" (UniqueName: \"kubernetes.io/projected/b3f04082-08d1-49ca-91fd-b538d81a8923-kube-api-access-8bpkl\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx\" (UID: \"b3f04082-08d1-49ca-91fd-b538d81a8923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.388037 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3f04082-08d1-49ca-91fd-b538d81a8923-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx\" (UID: \"b3f04082-08d1-49ca-91fd-b538d81a8923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.391835 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b3f04082-08d1-49ca-91fd-b538d81a8923-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx\" (UID: \"b3f04082-08d1-49ca-91fd-b538d81a8923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.391913 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3f04082-08d1-49ca-91fd-b538d81a8923-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx\" (UID: \"b3f04082-08d1-49ca-91fd-b538d81a8923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.405496 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bpkl\" (UniqueName: \"kubernetes.io/projected/b3f04082-08d1-49ca-91fd-b538d81a8923-kube-api-access-8bpkl\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx\" (UID: \"b3f04082-08d1-49ca-91fd-b538d81a8923\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.441857 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" Jan 24 08:14:26 crc kubenswrapper[4705]: I0124 08:14:26.979180 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx"] Jan 24 08:14:27 crc kubenswrapper[4705]: I0124 08:14:27.032298 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" event={"ID":"b3f04082-08d1-49ca-91fd-b538d81a8923","Type":"ContainerStarted","Data":"b7fa3d681bf1e540cb799c5d6d03967f8345bd6ca3bda69029b506b7e031fa9f"} Jan 24 08:14:28 crc kubenswrapper[4705]: I0124 08:14:28.042629 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" event={"ID":"b3f04082-08d1-49ca-91fd-b538d81a8923","Type":"ContainerStarted","Data":"e964af613e30f39b9913d064659dbe24b90705f08e8d6ae14efa46c942276cc6"} Jan 24 08:14:28 crc kubenswrapper[4705]: I0124 08:14:28.063219 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" podStartSLOduration=1.595716221 podStartE2EDuration="2.063200953s" podCreationTimestamp="2026-01-24 08:14:26 +0000 UTC" firstStartedPulling="2026-01-24 08:14:26.985872696 +0000 UTC m=+2005.705745984" lastFinishedPulling="2026-01-24 08:14:27.453357428 +0000 UTC m=+2006.173230716" observedRunningTime="2026-01-24 08:14:28.056330788 +0000 UTC m=+2006.776204076" watchObservedRunningTime="2026-01-24 08:14:28.063200953 +0000 UTC m=+2006.783074241" Jan 24 08:15:00 crc kubenswrapper[4705]: I0124 08:15:00.152374 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn"] Jan 24 08:15:00 crc kubenswrapper[4705]: I0124 08:15:00.154681 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn" Jan 24 08:15:00 crc kubenswrapper[4705]: I0124 08:15:00.159119 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 08:15:00 crc kubenswrapper[4705]: I0124 08:15:00.162230 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn"] Jan 24 08:15:00 crc kubenswrapper[4705]: I0124 08:15:00.166085 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 08:15:00 crc kubenswrapper[4705]: I0124 08:15:00.341531 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/202e4084-2245-4d25-a122-80c97f3e7824-secret-volume\") pod \"collect-profiles-29487375-5h8gn\" (UID: \"202e4084-2245-4d25-a122-80c97f3e7824\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn" Jan 24 08:15:00 crc kubenswrapper[4705]: I0124 08:15:00.341634 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frz27\" (UniqueName: \"kubernetes.io/projected/202e4084-2245-4d25-a122-80c97f3e7824-kube-api-access-frz27\") pod \"collect-profiles-29487375-5h8gn\" (UID: \"202e4084-2245-4d25-a122-80c97f3e7824\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn" Jan 24 08:15:00 crc kubenswrapper[4705]: I0124 08:15:00.341668 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/202e4084-2245-4d25-a122-80c97f3e7824-config-volume\") pod \"collect-profiles-29487375-5h8gn\" (UID: \"202e4084-2245-4d25-a122-80c97f3e7824\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn" Jan 24 08:15:00 crc kubenswrapper[4705]: I0124 08:15:00.443186 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frz27\" (UniqueName: \"kubernetes.io/projected/202e4084-2245-4d25-a122-80c97f3e7824-kube-api-access-frz27\") pod \"collect-profiles-29487375-5h8gn\" (UID: \"202e4084-2245-4d25-a122-80c97f3e7824\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn" Jan 24 08:15:00 crc kubenswrapper[4705]: I0124 08:15:00.443307 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/202e4084-2245-4d25-a122-80c97f3e7824-config-volume\") pod \"collect-profiles-29487375-5h8gn\" (UID: \"202e4084-2245-4d25-a122-80c97f3e7824\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn" Jan 24 08:15:00 crc kubenswrapper[4705]: I0124 08:15:00.443496 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/202e4084-2245-4d25-a122-80c97f3e7824-secret-volume\") pod \"collect-profiles-29487375-5h8gn\" (UID: \"202e4084-2245-4d25-a122-80c97f3e7824\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn" Jan 24 08:15:00 crc kubenswrapper[4705]: I0124 08:15:00.444806 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/202e4084-2245-4d25-a122-80c97f3e7824-config-volume\") pod \"collect-profiles-29487375-5h8gn\" (UID: \"202e4084-2245-4d25-a122-80c97f3e7824\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn" Jan 24 08:15:00 crc kubenswrapper[4705]: I0124 08:15:00.450362 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/202e4084-2245-4d25-a122-80c97f3e7824-secret-volume\") pod \"collect-profiles-29487375-5h8gn\" (UID: \"202e4084-2245-4d25-a122-80c97f3e7824\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn" Jan 24 08:15:00 crc kubenswrapper[4705]: I0124 08:15:00.461307 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frz27\" (UniqueName: \"kubernetes.io/projected/202e4084-2245-4d25-a122-80c97f3e7824-kube-api-access-frz27\") pod \"collect-profiles-29487375-5h8gn\" (UID: \"202e4084-2245-4d25-a122-80c97f3e7824\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn" Jan 24 08:15:00 crc kubenswrapper[4705]: I0124 08:15:00.488480 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn" Jan 24 08:15:00 crc kubenswrapper[4705]: I0124 08:15:00.973795 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn"] Jan 24 08:15:01 crc kubenswrapper[4705]: I0124 08:15:01.485216 4705 generic.go:334] "Generic (PLEG): container finished" podID="202e4084-2245-4d25-a122-80c97f3e7824" containerID="bacc8e34e26e38fa180dd1ce2e7dd36e7a679576d7f23960f00d28afc17202bc" exitCode=0 Jan 24 08:15:01 crc kubenswrapper[4705]: I0124 08:15:01.485267 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn" event={"ID":"202e4084-2245-4d25-a122-80c97f3e7824","Type":"ContainerDied","Data":"bacc8e34e26e38fa180dd1ce2e7dd36e7a679576d7f23960f00d28afc17202bc"} Jan 24 08:15:01 crc kubenswrapper[4705]: I0124 08:15:01.485517 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn" 
event={"ID":"202e4084-2245-4d25-a122-80c97f3e7824","Type":"ContainerStarted","Data":"96d2e2612dfda3923827ad9e288c7bc64b7fe0c6f47d8f4d78b7301ad8eb50ff"} Jan 24 08:15:02 crc kubenswrapper[4705]: I0124 08:15:02.824424 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn" Jan 24 08:15:02 crc kubenswrapper[4705]: I0124 08:15:02.995029 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frz27\" (UniqueName: \"kubernetes.io/projected/202e4084-2245-4d25-a122-80c97f3e7824-kube-api-access-frz27\") pod \"202e4084-2245-4d25-a122-80c97f3e7824\" (UID: \"202e4084-2245-4d25-a122-80c97f3e7824\") " Jan 24 08:15:02 crc kubenswrapper[4705]: I0124 08:15:02.995295 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/202e4084-2245-4d25-a122-80c97f3e7824-config-volume\") pod \"202e4084-2245-4d25-a122-80c97f3e7824\" (UID: \"202e4084-2245-4d25-a122-80c97f3e7824\") " Jan 24 08:15:02 crc kubenswrapper[4705]: I0124 08:15:02.995326 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/202e4084-2245-4d25-a122-80c97f3e7824-secret-volume\") pod \"202e4084-2245-4d25-a122-80c97f3e7824\" (UID: \"202e4084-2245-4d25-a122-80c97f3e7824\") " Jan 24 08:15:02 crc kubenswrapper[4705]: I0124 08:15:02.996111 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/202e4084-2245-4d25-a122-80c97f3e7824-config-volume" (OuterVolumeSpecName: "config-volume") pod "202e4084-2245-4d25-a122-80c97f3e7824" (UID: "202e4084-2245-4d25-a122-80c97f3e7824"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:15:03 crc kubenswrapper[4705]: I0124 08:15:03.004062 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/202e4084-2245-4d25-a122-80c97f3e7824-kube-api-access-frz27" (OuterVolumeSpecName: "kube-api-access-frz27") pod "202e4084-2245-4d25-a122-80c97f3e7824" (UID: "202e4084-2245-4d25-a122-80c97f3e7824"). InnerVolumeSpecName "kube-api-access-frz27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:15:03 crc kubenswrapper[4705]: I0124 08:15:03.005428 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/202e4084-2245-4d25-a122-80c97f3e7824-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "202e4084-2245-4d25-a122-80c97f3e7824" (UID: "202e4084-2245-4d25-a122-80c97f3e7824"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:15:03 crc kubenswrapper[4705]: I0124 08:15:03.058961 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9xrj"] Jan 24 08:15:03 crc kubenswrapper[4705]: I0124 08:15:03.071553 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9xrj"] Jan 24 08:15:03 crc kubenswrapper[4705]: I0124 08:15:03.097749 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/202e4084-2245-4d25-a122-80c97f3e7824-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 08:15:03 crc kubenswrapper[4705]: I0124 08:15:03.097786 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/202e4084-2245-4d25-a122-80c97f3e7824-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 08:15:03 crc kubenswrapper[4705]: I0124 08:15:03.097797 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frz27\" (UniqueName: 
\"kubernetes.io/projected/202e4084-2245-4d25-a122-80c97f3e7824-kube-api-access-frz27\") on node \"crc\" DevicePath \"\"" Jan 24 08:15:03 crc kubenswrapper[4705]: I0124 08:15:03.510918 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn" event={"ID":"202e4084-2245-4d25-a122-80c97f3e7824","Type":"ContainerDied","Data":"96d2e2612dfda3923827ad9e288c7bc64b7fe0c6f47d8f4d78b7301ad8eb50ff"} Jan 24 08:15:03 crc kubenswrapper[4705]: I0124 08:15:03.511301 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96d2e2612dfda3923827ad9e288c7bc64b7fe0c6f47d8f4d78b7301ad8eb50ff" Jan 24 08:15:03 crc kubenswrapper[4705]: I0124 08:15:03.511154 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn" Jan 24 08:15:03 crc kubenswrapper[4705]: I0124 08:15:03.590746 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43f6fcc7-6bf4-4313-95f7-40e61821d3f1" path="/var/lib/kubelet/pods/43f6fcc7-6bf4-4313-95f7-40e61821d3f1/volumes" Jan 24 08:15:03 crc kubenswrapper[4705]: I0124 08:15:03.887346 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl"] Jan 24 08:15:03 crc kubenswrapper[4705]: I0124 08:15:03.894263 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487330-7xnrl"] Jan 24 08:15:05 crc kubenswrapper[4705]: I0124 08:15:05.586695 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da4379ba-31cd-436c-b1a6-8e715c0d2dca" path="/var/lib/kubelet/pods/da4379ba-31cd-436c-b1a6-8e715c0d2dca/volumes" Jan 24 08:15:06 crc kubenswrapper[4705]: I0124 08:15:06.534786 4705 scope.go:117] "RemoveContainer" containerID="ee1a8586524a1467178aca761783c32fe24e9781ace14a6ed5f8dbe2c9030d6e" Jan 24 08:15:06 crc 
kubenswrapper[4705]: I0124 08:15:06.560256 4705 scope.go:117] "RemoveContainer" containerID="bc5221212fcca4b95bc82323a40309f34e7018101e7a9458edeba5e9de1d7729" Jan 24 08:15:06 crc kubenswrapper[4705]: I0124 08:15:06.625808 4705 scope.go:117] "RemoveContainer" containerID="04f6ddb6c4fb2e6a5b77925db5acab3ef76be16b6af313f9fb338900bea6ef1f" Jan 24 08:15:06 crc kubenswrapper[4705]: I0124 08:15:06.749400 4705 scope.go:117] "RemoveContainer" containerID="9cfaa321eb3510f5b6f3715f018d0012eae20cca35fa70c0c173345bb08e4a41" Jan 24 08:15:27 crc kubenswrapper[4705]: I0124 08:15:27.712219 4705 generic.go:334] "Generic (PLEG): container finished" podID="b3f04082-08d1-49ca-91fd-b538d81a8923" containerID="e964af613e30f39b9913d064659dbe24b90705f08e8d6ae14efa46c942276cc6" exitCode=0 Jan 24 08:15:27 crc kubenswrapper[4705]: I0124 08:15:27.712306 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" event={"ID":"b3f04082-08d1-49ca-91fd-b538d81a8923","Type":"ContainerDied","Data":"e964af613e30f39b9913d064659dbe24b90705f08e8d6ae14efa46c942276cc6"} Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.168532 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.249002 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bpkl\" (UniqueName: \"kubernetes.io/projected/b3f04082-08d1-49ca-91fd-b538d81a8923-kube-api-access-8bpkl\") pod \"b3f04082-08d1-49ca-91fd-b538d81a8923\" (UID: \"b3f04082-08d1-49ca-91fd-b538d81a8923\") " Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.249105 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3f04082-08d1-49ca-91fd-b538d81a8923-ssh-key-openstack-edpm-ipam\") pod \"b3f04082-08d1-49ca-91fd-b538d81a8923\" (UID: \"b3f04082-08d1-49ca-91fd-b538d81a8923\") " Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.249172 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3f04082-08d1-49ca-91fd-b538d81a8923-inventory\") pod \"b3f04082-08d1-49ca-91fd-b538d81a8923\" (UID: \"b3f04082-08d1-49ca-91fd-b538d81a8923\") " Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.257354 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f04082-08d1-49ca-91fd-b538d81a8923-kube-api-access-8bpkl" (OuterVolumeSpecName: "kube-api-access-8bpkl") pod "b3f04082-08d1-49ca-91fd-b538d81a8923" (UID: "b3f04082-08d1-49ca-91fd-b538d81a8923"). InnerVolumeSpecName "kube-api-access-8bpkl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.282052 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f04082-08d1-49ca-91fd-b538d81a8923-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b3f04082-08d1-49ca-91fd-b538d81a8923" (UID: "b3f04082-08d1-49ca-91fd-b538d81a8923"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.285257 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f04082-08d1-49ca-91fd-b538d81a8923-inventory" (OuterVolumeSpecName: "inventory") pod "b3f04082-08d1-49ca-91fd-b538d81a8923" (UID: "b3f04082-08d1-49ca-91fd-b538d81a8923"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.351550 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3f04082-08d1-49ca-91fd-b538d81a8923-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.351979 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bpkl\" (UniqueName: \"kubernetes.io/projected/b3f04082-08d1-49ca-91fd-b538d81a8923-kube-api-access-8bpkl\") on node \"crc\" DevicePath \"\"" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.351992 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3f04082-08d1-49ca-91fd-b538d81a8923-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.734026 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" 
event={"ID":"b3f04082-08d1-49ca-91fd-b538d81a8923","Type":"ContainerDied","Data":"b7fa3d681bf1e540cb799c5d6d03967f8345bd6ca3bda69029b506b7e031fa9f"} Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.734068 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7fa3d681bf1e540cb799c5d6d03967f8345bd6ca3bda69029b506b7e031fa9f" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.734087 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.842011 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pqr72"] Jan 24 08:15:29 crc kubenswrapper[4705]: E0124 08:15:29.842563 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202e4084-2245-4d25-a122-80c97f3e7824" containerName="collect-profiles" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.842588 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="202e4084-2245-4d25-a122-80c97f3e7824" containerName="collect-profiles" Jan 24 08:15:29 crc kubenswrapper[4705]: E0124 08:15:29.842642 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f04082-08d1-49ca-91fd-b538d81a8923" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.842653 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f04082-08d1-49ca-91fd-b538d81a8923" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.842933 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f04082-08d1-49ca-91fd-b538d81a8923" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.842961 4705 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="202e4084-2245-4d25-a122-80c97f3e7824" containerName="collect-profiles" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.843799 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.849673 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.849850 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.849896 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.849863 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.854539 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pqr72"] Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.963662 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wlsd\" (UniqueName: \"kubernetes.io/projected/23b20cce-9e55-4a5e-b3ba-72526a662b7d-kube-api-access-2wlsd\") pod \"ssh-known-hosts-edpm-deployment-pqr72\" (UID: \"23b20cce-9e55-4a5e-b3ba-72526a662b7d\") " pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.963722 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b20cce-9e55-4a5e-b3ba-72526a662b7d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pqr72\" (UID: \"23b20cce-9e55-4a5e-b3ba-72526a662b7d\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" Jan 24 08:15:29 crc kubenswrapper[4705]: I0124 08:15:29.963804 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/23b20cce-9e55-4a5e-b3ba-72526a662b7d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pqr72\" (UID: \"23b20cce-9e55-4a5e-b3ba-72526a662b7d\") " pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" Jan 24 08:15:30 crc kubenswrapper[4705]: I0124 08:15:30.065883 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wlsd\" (UniqueName: \"kubernetes.io/projected/23b20cce-9e55-4a5e-b3ba-72526a662b7d-kube-api-access-2wlsd\") pod \"ssh-known-hosts-edpm-deployment-pqr72\" (UID: \"23b20cce-9e55-4a5e-b3ba-72526a662b7d\") " pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" Jan 24 08:15:30 crc kubenswrapper[4705]: I0124 08:15:30.065955 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b20cce-9e55-4a5e-b3ba-72526a662b7d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pqr72\" (UID: \"23b20cce-9e55-4a5e-b3ba-72526a662b7d\") " pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" Jan 24 08:15:30 crc kubenswrapper[4705]: I0124 08:15:30.066050 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/23b20cce-9e55-4a5e-b3ba-72526a662b7d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pqr72\" (UID: \"23b20cce-9e55-4a5e-b3ba-72526a662b7d\") " pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" Jan 24 08:15:30 crc kubenswrapper[4705]: I0124 08:15:30.070471 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/23b20cce-9e55-4a5e-b3ba-72526a662b7d-inventory-0\") pod 
\"ssh-known-hosts-edpm-deployment-pqr72\" (UID: \"23b20cce-9e55-4a5e-b3ba-72526a662b7d\") " pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" Jan 24 08:15:30 crc kubenswrapper[4705]: I0124 08:15:30.070888 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b20cce-9e55-4a5e-b3ba-72526a662b7d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pqr72\" (UID: \"23b20cce-9e55-4a5e-b3ba-72526a662b7d\") " pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" Jan 24 08:15:30 crc kubenswrapper[4705]: I0124 08:15:30.080662 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wlsd\" (UniqueName: \"kubernetes.io/projected/23b20cce-9e55-4a5e-b3ba-72526a662b7d-kube-api-access-2wlsd\") pod \"ssh-known-hosts-edpm-deployment-pqr72\" (UID: \"23b20cce-9e55-4a5e-b3ba-72526a662b7d\") " pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" Jan 24 08:15:30 crc kubenswrapper[4705]: I0124 08:15:30.168909 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" Jan 24 08:15:30 crc kubenswrapper[4705]: I0124 08:15:30.740223 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pqr72"] Jan 24 08:15:30 crc kubenswrapper[4705]: I0124 08:15:30.754841 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 08:15:31 crc kubenswrapper[4705]: I0124 08:15:31.770440 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" event={"ID":"23b20cce-9e55-4a5e-b3ba-72526a662b7d","Type":"ContainerStarted","Data":"728744eee5809d5531dcf368b4e7df684b20ba30ca63e1236bd2d0fe1d29acc3"} Jan 24 08:15:31 crc kubenswrapper[4705]: I0124 08:15:31.771001 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" event={"ID":"23b20cce-9e55-4a5e-b3ba-72526a662b7d","Type":"ContainerStarted","Data":"fd01eb326d5023e70b3dd7527b0aaaa6e8916a5435f030b7c2250b55b8e6f6be"} Jan 24 08:15:32 crc kubenswrapper[4705]: I0124 08:15:32.803143 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" podStartSLOduration=3.168691305 podStartE2EDuration="3.803119214s" podCreationTimestamp="2026-01-24 08:15:29 +0000 UTC" firstStartedPulling="2026-01-24 08:15:30.754483509 +0000 UTC m=+2069.474356797" lastFinishedPulling="2026-01-24 08:15:31.388911418 +0000 UTC m=+2070.108784706" observedRunningTime="2026-01-24 08:15:32.795559308 +0000 UTC m=+2071.515432596" watchObservedRunningTime="2026-01-24 08:15:32.803119214 +0000 UTC m=+2071.522992492" Jan 24 08:15:37 crc kubenswrapper[4705]: I0124 08:15:37.071873 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:15:37 crc kubenswrapper[4705]: I0124 08:15:37.073731 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:15:39 crc kubenswrapper[4705]: I0124 08:15:39.838347 4705 generic.go:334] "Generic (PLEG): container finished" podID="23b20cce-9e55-4a5e-b3ba-72526a662b7d" containerID="728744eee5809d5531dcf368b4e7df684b20ba30ca63e1236bd2d0fe1d29acc3" exitCode=0 Jan 24 08:15:39 crc kubenswrapper[4705]: I0124 08:15:39.838435 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" event={"ID":"23b20cce-9e55-4a5e-b3ba-72526a662b7d","Type":"ContainerDied","Data":"728744eee5809d5531dcf368b4e7df684b20ba30ca63e1236bd2d0fe1d29acc3"} Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.332704 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.382180 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/23b20cce-9e55-4a5e-b3ba-72526a662b7d-inventory-0\") pod \"23b20cce-9e55-4a5e-b3ba-72526a662b7d\" (UID: \"23b20cce-9e55-4a5e-b3ba-72526a662b7d\") " Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.382270 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wlsd\" (UniqueName: \"kubernetes.io/projected/23b20cce-9e55-4a5e-b3ba-72526a662b7d-kube-api-access-2wlsd\") pod \"23b20cce-9e55-4a5e-b3ba-72526a662b7d\" (UID: \"23b20cce-9e55-4a5e-b3ba-72526a662b7d\") " Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.382421 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b20cce-9e55-4a5e-b3ba-72526a662b7d-ssh-key-openstack-edpm-ipam\") pod \"23b20cce-9e55-4a5e-b3ba-72526a662b7d\" (UID: \"23b20cce-9e55-4a5e-b3ba-72526a662b7d\") " Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.388449 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b20cce-9e55-4a5e-b3ba-72526a662b7d-kube-api-access-2wlsd" (OuterVolumeSpecName: "kube-api-access-2wlsd") pod "23b20cce-9e55-4a5e-b3ba-72526a662b7d" (UID: "23b20cce-9e55-4a5e-b3ba-72526a662b7d"). InnerVolumeSpecName "kube-api-access-2wlsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.413147 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b20cce-9e55-4a5e-b3ba-72526a662b7d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "23b20cce-9e55-4a5e-b3ba-72526a662b7d" (UID: "23b20cce-9e55-4a5e-b3ba-72526a662b7d"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.414463 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b20cce-9e55-4a5e-b3ba-72526a662b7d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "23b20cce-9e55-4a5e-b3ba-72526a662b7d" (UID: "23b20cce-9e55-4a5e-b3ba-72526a662b7d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.484947 4705 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/23b20cce-9e55-4a5e-b3ba-72526a662b7d-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.484992 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wlsd\" (UniqueName: \"kubernetes.io/projected/23b20cce-9e55-4a5e-b3ba-72526a662b7d-kube-api-access-2wlsd\") on node \"crc\" DevicePath \"\"" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.485009 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b20cce-9e55-4a5e-b3ba-72526a662b7d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.857345 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" event={"ID":"23b20cce-9e55-4a5e-b3ba-72526a662b7d","Type":"ContainerDied","Data":"fd01eb326d5023e70b3dd7527b0aaaa6e8916a5435f030b7c2250b55b8e6f6be"} Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.857392 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd01eb326d5023e70b3dd7527b0aaaa6e8916a5435f030b7c2250b55b8e6f6be" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.857456 
4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pqr72" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.940053 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k"] Jan 24 08:15:41 crc kubenswrapper[4705]: E0124 08:15:41.940607 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b20cce-9e55-4a5e-b3ba-72526a662b7d" containerName="ssh-known-hosts-edpm-deployment" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.940631 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b20cce-9e55-4a5e-b3ba-72526a662b7d" containerName="ssh-known-hosts-edpm-deployment" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.940878 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b20cce-9e55-4a5e-b3ba-72526a662b7d" containerName="ssh-known-hosts-edpm-deployment" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.942684 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.947897 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.947916 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.948080 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.948574 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.954313 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k"] Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.996454 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xdc7k\" (UID: \"8c2d1fb0-3187-4f07-bc44-d3c81689b09e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.996533 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xdc7k\" (UID: \"8c2d1fb0-3187-4f07-bc44-d3c81689b09e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" Jan 24 08:15:41 crc kubenswrapper[4705]: I0124 08:15:41.996996 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx5jc\" (UniqueName: \"kubernetes.io/projected/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-kube-api-access-cx5jc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xdc7k\" (UID: \"8c2d1fb0-3187-4f07-bc44-d3c81689b09e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" Jan 24 08:15:42 crc kubenswrapper[4705]: I0124 08:15:42.099215 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx5jc\" (UniqueName: \"kubernetes.io/projected/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-kube-api-access-cx5jc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xdc7k\" (UID: \"8c2d1fb0-3187-4f07-bc44-d3c81689b09e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" Jan 24 08:15:42 crc kubenswrapper[4705]: I0124 08:15:42.099400 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xdc7k\" (UID: \"8c2d1fb0-3187-4f07-bc44-d3c81689b09e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" Jan 24 08:15:42 crc kubenswrapper[4705]: I0124 08:15:42.099448 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xdc7k\" (UID: \"8c2d1fb0-3187-4f07-bc44-d3c81689b09e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" Jan 24 08:15:42 crc kubenswrapper[4705]: I0124 08:15:42.106700 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-xdc7k\" (UID: \"8c2d1fb0-3187-4f07-bc44-d3c81689b09e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" Jan 24 08:15:42 crc kubenswrapper[4705]: I0124 08:15:42.108314 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xdc7k\" (UID: \"8c2d1fb0-3187-4f07-bc44-d3c81689b09e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" Jan 24 08:15:42 crc kubenswrapper[4705]: I0124 08:15:42.115759 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx5jc\" (UniqueName: \"kubernetes.io/projected/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-kube-api-access-cx5jc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xdc7k\" (UID: \"8c2d1fb0-3187-4f07-bc44-d3c81689b09e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" Jan 24 08:15:42 crc kubenswrapper[4705]: I0124 08:15:42.270772 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" Jan 24 08:15:42 crc kubenswrapper[4705]: I0124 08:15:42.800521 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k"] Jan 24 08:15:42 crc kubenswrapper[4705]: I0124 08:15:42.868324 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" event={"ID":"8c2d1fb0-3187-4f07-bc44-d3c81689b09e","Type":"ContainerStarted","Data":"d5e03bad00b29acd1f2584f369f040f8295d18a74589ee8cb56f77e93ac61df6"} Jan 24 08:15:43 crc kubenswrapper[4705]: I0124 08:15:43.878043 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" event={"ID":"8c2d1fb0-3187-4f07-bc44-d3c81689b09e","Type":"ContainerStarted","Data":"0475ede88e10c77f72ffac2514731d779f83af05a8e557f8a3cade9eb0be95e0"} Jan 24 08:15:43 crc kubenswrapper[4705]: I0124 08:15:43.899745 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" podStartSLOduration=2.461395321 podStartE2EDuration="2.899715846s" podCreationTimestamp="2026-01-24 08:15:41 +0000 UTC" firstStartedPulling="2026-01-24 08:15:42.797689019 +0000 UTC m=+2081.517562307" lastFinishedPulling="2026-01-24 08:15:43.236009544 +0000 UTC m=+2081.955882832" observedRunningTime="2026-01-24 08:15:43.899478909 +0000 UTC m=+2082.619352207" watchObservedRunningTime="2026-01-24 08:15:43.899715846 +0000 UTC m=+2082.619589134" Jan 24 08:15:53 crc kubenswrapper[4705]: I0124 08:15:53.961957 4705 generic.go:334] "Generic (PLEG): container finished" podID="8c2d1fb0-3187-4f07-bc44-d3c81689b09e" containerID="0475ede88e10c77f72ffac2514731d779f83af05a8e557f8a3cade9eb0be95e0" exitCode=0 Jan 24 08:15:53 crc kubenswrapper[4705]: I0124 08:15:53.962041 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" event={"ID":"8c2d1fb0-3187-4f07-bc44-d3c81689b09e","Type":"ContainerDied","Data":"0475ede88e10c77f72ffac2514731d779f83af05a8e557f8a3cade9eb0be95e0"} Jan 24 08:15:55 crc kubenswrapper[4705]: I0124 08:15:55.439033 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" Jan 24 08:15:55 crc kubenswrapper[4705]: I0124 08:15:55.504411 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-inventory\") pod \"8c2d1fb0-3187-4f07-bc44-d3c81689b09e\" (UID: \"8c2d1fb0-3187-4f07-bc44-d3c81689b09e\") " Jan 24 08:15:55 crc kubenswrapper[4705]: I0124 08:15:55.504465 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx5jc\" (UniqueName: \"kubernetes.io/projected/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-kube-api-access-cx5jc\") pod \"8c2d1fb0-3187-4f07-bc44-d3c81689b09e\" (UID: \"8c2d1fb0-3187-4f07-bc44-d3c81689b09e\") " Jan 24 08:15:55 crc kubenswrapper[4705]: I0124 08:15:55.504556 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-ssh-key-openstack-edpm-ipam\") pod \"8c2d1fb0-3187-4f07-bc44-d3c81689b09e\" (UID: \"8c2d1fb0-3187-4f07-bc44-d3c81689b09e\") " Jan 24 08:15:55 crc kubenswrapper[4705]: I0124 08:15:55.510139 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-kube-api-access-cx5jc" (OuterVolumeSpecName: "kube-api-access-cx5jc") pod "8c2d1fb0-3187-4f07-bc44-d3c81689b09e" (UID: "8c2d1fb0-3187-4f07-bc44-d3c81689b09e"). InnerVolumeSpecName "kube-api-access-cx5jc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:15:55 crc kubenswrapper[4705]: I0124 08:15:55.530984 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-inventory" (OuterVolumeSpecName: "inventory") pod "8c2d1fb0-3187-4f07-bc44-d3c81689b09e" (UID: "8c2d1fb0-3187-4f07-bc44-d3c81689b09e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:15:55 crc kubenswrapper[4705]: I0124 08:15:55.533070 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8c2d1fb0-3187-4f07-bc44-d3c81689b09e" (UID: "8c2d1fb0-3187-4f07-bc44-d3c81689b09e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:15:55 crc kubenswrapper[4705]: I0124 08:15:55.613372 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 08:15:55 crc kubenswrapper[4705]: I0124 08:15:55.613410 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx5jc\" (UniqueName: \"kubernetes.io/projected/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-kube-api-access-cx5jc\") on node \"crc\" DevicePath \"\"" Jan 24 08:15:55 crc kubenswrapper[4705]: I0124 08:15:55.613424 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c2d1fb0-3187-4f07-bc44-d3c81689b09e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 08:15:55 crc kubenswrapper[4705]: I0124 08:15:55.980605 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" 
event={"ID":"8c2d1fb0-3187-4f07-bc44-d3c81689b09e","Type":"ContainerDied","Data":"d5e03bad00b29acd1f2584f369f040f8295d18a74589ee8cb56f77e93ac61df6"} Jan 24 08:15:55 crc kubenswrapper[4705]: I0124 08:15:55.980654 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5e03bad00b29acd1f2584f369f040f8295d18a74589ee8cb56f77e93ac61df6" Jan 24 08:15:55 crc kubenswrapper[4705]: I0124 08:15:55.980661 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xdc7k" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.052173 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd"] Jan 24 08:15:56 crc kubenswrapper[4705]: E0124 08:15:56.052569 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2d1fb0-3187-4f07-bc44-d3c81689b09e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.052589 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2d1fb0-3187-4f07-bc44-d3c81689b09e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.052781 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2d1fb0-3187-4f07-bc44-d3c81689b09e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.053483 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.056031 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.056224 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.059639 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.059916 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.065210 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd"] Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.224466 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ttth\" (UniqueName: \"kubernetes.io/projected/a0eb2e96-4e56-4c71-a977-7b27892ba77c-kube-api-access-4ttth\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd\" (UID: \"a0eb2e96-4e56-4c71-a977-7b27892ba77c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.225035 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0eb2e96-4e56-4c71-a977-7b27892ba77c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd\" (UID: \"a0eb2e96-4e56-4c71-a977-7b27892ba77c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 
08:15:56.225212 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0eb2e96-4e56-4c71-a977-7b27892ba77c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd\" (UID: \"a0eb2e96-4e56-4c71-a977-7b27892ba77c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.327373 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0eb2e96-4e56-4c71-a977-7b27892ba77c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd\" (UID: \"a0eb2e96-4e56-4c71-a977-7b27892ba77c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.327488 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0eb2e96-4e56-4c71-a977-7b27892ba77c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd\" (UID: \"a0eb2e96-4e56-4c71-a977-7b27892ba77c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.327599 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ttth\" (UniqueName: \"kubernetes.io/projected/a0eb2e96-4e56-4c71-a977-7b27892ba77c-kube-api-access-4ttth\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd\" (UID: \"a0eb2e96-4e56-4c71-a977-7b27892ba77c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.331639 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0eb2e96-4e56-4c71-a977-7b27892ba77c-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd\" (UID: \"a0eb2e96-4e56-4c71-a977-7b27892ba77c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.338869 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0eb2e96-4e56-4c71-a977-7b27892ba77c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd\" (UID: \"a0eb2e96-4e56-4c71-a977-7b27892ba77c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.345155 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ttth\" (UniqueName: \"kubernetes.io/projected/a0eb2e96-4e56-4c71-a977-7b27892ba77c-kube-api-access-4ttth\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd\" (UID: \"a0eb2e96-4e56-4c71-a977-7b27892ba77c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.369138 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.933128 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd"] Jan 24 08:15:56 crc kubenswrapper[4705]: I0124 08:15:56.989622 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" event={"ID":"a0eb2e96-4e56-4c71-a977-7b27892ba77c","Type":"ContainerStarted","Data":"bc5134a865a526499075ff56c739803fea5282cd8eeb6c798f382cf437fa77b5"} Jan 24 08:15:58 crc kubenswrapper[4705]: I0124 08:15:58.003172 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" event={"ID":"a0eb2e96-4e56-4c71-a977-7b27892ba77c","Type":"ContainerStarted","Data":"c2a77ec4e80df47c0b765c263118c32294277580c8fa98133b412a899e87912e"} Jan 24 08:16:07 crc kubenswrapper[4705]: I0124 08:16:07.071252 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:16:07 crc kubenswrapper[4705]: I0124 08:16:07.071909 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:16:10 crc kubenswrapper[4705]: I0124 08:16:10.139284 4705 generic.go:334] "Generic (PLEG): container finished" podID="a0eb2e96-4e56-4c71-a977-7b27892ba77c" containerID="c2a77ec4e80df47c0b765c263118c32294277580c8fa98133b412a899e87912e" exitCode=0 Jan 24 08:16:10 crc kubenswrapper[4705]: 
I0124 08:16:10.139399 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" event={"ID":"a0eb2e96-4e56-4c71-a977-7b27892ba77c","Type":"ContainerDied","Data":"c2a77ec4e80df47c0b765c263118c32294277580c8fa98133b412a899e87912e"} Jan 24 08:16:11 crc kubenswrapper[4705]: I0124 08:16:11.707348 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" Jan 24 08:16:11 crc kubenswrapper[4705]: I0124 08:16:11.903263 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0eb2e96-4e56-4c71-a977-7b27892ba77c-inventory\") pod \"a0eb2e96-4e56-4c71-a977-7b27892ba77c\" (UID: \"a0eb2e96-4e56-4c71-a977-7b27892ba77c\") " Jan 24 08:16:11 crc kubenswrapper[4705]: I0124 08:16:11.903435 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ttth\" (UniqueName: \"kubernetes.io/projected/a0eb2e96-4e56-4c71-a977-7b27892ba77c-kube-api-access-4ttth\") pod \"a0eb2e96-4e56-4c71-a977-7b27892ba77c\" (UID: \"a0eb2e96-4e56-4c71-a977-7b27892ba77c\") " Jan 24 08:16:11 crc kubenswrapper[4705]: I0124 08:16:11.903613 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0eb2e96-4e56-4c71-a977-7b27892ba77c-ssh-key-openstack-edpm-ipam\") pod \"a0eb2e96-4e56-4c71-a977-7b27892ba77c\" (UID: \"a0eb2e96-4e56-4c71-a977-7b27892ba77c\") " Jan 24 08:16:11 crc kubenswrapper[4705]: I0124 08:16:11.908257 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0eb2e96-4e56-4c71-a977-7b27892ba77c-kube-api-access-4ttth" (OuterVolumeSpecName: "kube-api-access-4ttth") pod "a0eb2e96-4e56-4c71-a977-7b27892ba77c" (UID: "a0eb2e96-4e56-4c71-a977-7b27892ba77c"). InnerVolumeSpecName "kube-api-access-4ttth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:16:11 crc kubenswrapper[4705]: I0124 08:16:11.931911 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0eb2e96-4e56-4c71-a977-7b27892ba77c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a0eb2e96-4e56-4c71-a977-7b27892ba77c" (UID: "a0eb2e96-4e56-4c71-a977-7b27892ba77c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:16:11 crc kubenswrapper[4705]: I0124 08:16:11.932310 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0eb2e96-4e56-4c71-a977-7b27892ba77c-inventory" (OuterVolumeSpecName: "inventory") pod "a0eb2e96-4e56-4c71-a977-7b27892ba77c" (UID: "a0eb2e96-4e56-4c71-a977-7b27892ba77c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.005808 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ttth\" (UniqueName: \"kubernetes.io/projected/a0eb2e96-4e56-4c71-a977-7b27892ba77c-kube-api-access-4ttth\") on node \"crc\" DevicePath \"\"" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.005946 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0eb2e96-4e56-4c71-a977-7b27892ba77c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.005965 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0eb2e96-4e56-4c71-a977-7b27892ba77c-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.159387 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" 
event={"ID":"a0eb2e96-4e56-4c71-a977-7b27892ba77c","Type":"ContainerDied","Data":"bc5134a865a526499075ff56c739803fea5282cd8eeb6c798f382cf437fa77b5"} Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.159444 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc5134a865a526499075ff56c739803fea5282cd8eeb6c798f382cf437fa77b5" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.159752 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.295138 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf"] Jan 24 08:16:12 crc kubenswrapper[4705]: E0124 08:16:12.295631 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0eb2e96-4e56-4c71-a977-7b27892ba77c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.295651 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0eb2e96-4e56-4c71-a977-7b27892ba77c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.296722 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0eb2e96-4e56-4c71-a977-7b27892ba77c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.297387 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.301089 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.301127 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.301515 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.301538 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.301528 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.301745 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.301900 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.303325 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.319004 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf"] Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.412422 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.412486 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.412530 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.412563 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.412722 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.412772 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbpdc\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-kube-api-access-sbpdc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.412865 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.412968 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.413011 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.413078 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.413192 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.413232 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.413271 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.413322 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.515596 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.515964 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.516013 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.516062 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.516092 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbpdc\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-kube-api-access-sbpdc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.516121 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.516639 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.516674 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.517259 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.517347 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.517408 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-telemetry-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.517445 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.517492 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.517540 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.652341 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.652425 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.653039 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.653181 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.653570 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 
08:16:12.653714 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.653771 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.655904 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.657563 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.657684 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.660034 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.664794 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.671731 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.671870 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbpdc\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-kube-api-access-sbpdc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-565zf\" (UID: 
\"69797704-2611-4e94-8321-878049b18d9e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:12 crc kubenswrapper[4705]: I0124 08:16:12.924185 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:16:13 crc kubenswrapper[4705]: I0124 08:16:13.478326 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf"] Jan 24 08:16:14 crc kubenswrapper[4705]: I0124 08:16:14.179334 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" event={"ID":"69797704-2611-4e94-8321-878049b18d9e","Type":"ContainerStarted","Data":"166cffac1d274b873a50bd7b76290d827f6701811f2456efcea3bc7c302202da"} Jan 24 08:16:15 crc kubenswrapper[4705]: I0124 08:16:15.193487 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" event={"ID":"69797704-2611-4e94-8321-878049b18d9e","Type":"ContainerStarted","Data":"ec64076d8d7e677bb255c579b8b5f1878d18067fed20fe6ad46113c6bcf3cc2e"} Jan 24 08:16:37 crc kubenswrapper[4705]: I0124 08:16:37.071983 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:16:37 crc kubenswrapper[4705]: I0124 08:16:37.072604 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:16:37 crc kubenswrapper[4705]: I0124 
08:16:37.072671 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 08:16:37 crc kubenswrapper[4705]: I0124 08:16:37.073737 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e75dab5ce4ca72aff4b9b317d9abfccd45194eb72cb1bff0efc28fc55dba4e9f"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 08:16:37 crc kubenswrapper[4705]: I0124 08:16:37.073798 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://e75dab5ce4ca72aff4b9b317d9abfccd45194eb72cb1bff0efc28fc55dba4e9f" gracePeriod=600 Jan 24 08:16:37 crc kubenswrapper[4705]: I0124 08:16:37.476168 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerID="e75dab5ce4ca72aff4b9b317d9abfccd45194eb72cb1bff0efc28fc55dba4e9f" exitCode=0 Jan 24 08:16:37 crc kubenswrapper[4705]: I0124 08:16:37.476252 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"e75dab5ce4ca72aff4b9b317d9abfccd45194eb72cb1bff0efc28fc55dba4e9f"} Jan 24 08:16:37 crc kubenswrapper[4705]: I0124 08:16:37.476684 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f"} Jan 24 08:16:37 crc kubenswrapper[4705]: I0124 08:16:37.476709 4705 scope.go:117] 
"RemoveContainer" containerID="2ed108de9e56c7c676a83b1445999f376857f3c5e32b96300536a751b0661dd7" Jan 24 08:16:37 crc kubenswrapper[4705]: I0124 08:16:37.503033 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" podStartSLOduration=24.944921265 podStartE2EDuration="25.503003045s" podCreationTimestamp="2026-01-24 08:16:12 +0000 UTC" firstStartedPulling="2026-01-24 08:16:13.486689496 +0000 UTC m=+2112.206562784" lastFinishedPulling="2026-01-24 08:16:14.044771276 +0000 UTC m=+2112.764644564" observedRunningTime="2026-01-24 08:16:15.220495197 +0000 UTC m=+2113.940368485" watchObservedRunningTime="2026-01-24 08:16:37.503003045 +0000 UTC m=+2136.222876333" Jan 24 08:16:59 crc kubenswrapper[4705]: I0124 08:16:59.700055 4705 generic.go:334] "Generic (PLEG): container finished" podID="69797704-2611-4e94-8321-878049b18d9e" containerID="ec64076d8d7e677bb255c579b8b5f1878d18067fed20fe6ad46113c6bcf3cc2e" exitCode=0 Jan 24 08:16:59 crc kubenswrapper[4705]: I0124 08:16:59.700248 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" event={"ID":"69797704-2611-4e94-8321-878049b18d9e","Type":"ContainerDied","Data":"ec64076d8d7e677bb255c579b8b5f1878d18067fed20fe6ad46113c6bcf3cc2e"} Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.154026 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.261183 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"69797704-2611-4e94-8321-878049b18d9e\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.262622 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-libvirt-combined-ca-bundle\") pod \"69797704-2611-4e94-8321-878049b18d9e\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.262742 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-ovn-combined-ca-bundle\") pod \"69797704-2611-4e94-8321-878049b18d9e\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.262847 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"69797704-2611-4e94-8321-878049b18d9e\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.262951 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-neutron-metadata-combined-ca-bundle\") pod 
\"69797704-2611-4e94-8321-878049b18d9e\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.262981 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"69797704-2611-4e94-8321-878049b18d9e\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.263106 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"69797704-2611-4e94-8321-878049b18d9e\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.263141 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-repo-setup-combined-ca-bundle\") pod \"69797704-2611-4e94-8321-878049b18d9e\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.263172 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-nova-combined-ca-bundle\") pod \"69797704-2611-4e94-8321-878049b18d9e\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.263218 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-ssh-key-openstack-edpm-ipam\") pod \"69797704-2611-4e94-8321-878049b18d9e\" (UID: 
\"69797704-2611-4e94-8321-878049b18d9e\") " Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.263246 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-inventory\") pod \"69797704-2611-4e94-8321-878049b18d9e\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.263292 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-telemetry-combined-ca-bundle\") pod \"69797704-2611-4e94-8321-878049b18d9e\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.263355 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-bootstrap-combined-ca-bundle\") pod \"69797704-2611-4e94-8321-878049b18d9e\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.263380 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbpdc\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-kube-api-access-sbpdc\") pod \"69797704-2611-4e94-8321-878049b18d9e\" (UID: \"69797704-2611-4e94-8321-878049b18d9e\") " Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.268970 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "69797704-2611-4e94-8321-878049b18d9e" (UID: "69797704-2611-4e94-8321-878049b18d9e"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.269327 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "69797704-2611-4e94-8321-878049b18d9e" (UID: "69797704-2611-4e94-8321-878049b18d9e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.269991 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "69797704-2611-4e94-8321-878049b18d9e" (UID: "69797704-2611-4e94-8321-878049b18d9e"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.270066 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "69797704-2611-4e94-8321-878049b18d9e" (UID: "69797704-2611-4e94-8321-878049b18d9e"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.273540 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "69797704-2611-4e94-8321-878049b18d9e" (UID: "69797704-2611-4e94-8321-878049b18d9e"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.274641 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "69797704-2611-4e94-8321-878049b18d9e" (UID: "69797704-2611-4e94-8321-878049b18d9e"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.275928 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-kube-api-access-sbpdc" (OuterVolumeSpecName: "kube-api-access-sbpdc") pod "69797704-2611-4e94-8321-878049b18d9e" (UID: "69797704-2611-4e94-8321-878049b18d9e"). InnerVolumeSpecName "kube-api-access-sbpdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.276003 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "69797704-2611-4e94-8321-878049b18d9e" (UID: "69797704-2611-4e94-8321-878049b18d9e"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.275992 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "69797704-2611-4e94-8321-878049b18d9e" (UID: "69797704-2611-4e94-8321-878049b18d9e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.276363 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "69797704-2611-4e94-8321-878049b18d9e" (UID: "69797704-2611-4e94-8321-878049b18d9e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.290330 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "69797704-2611-4e94-8321-878049b18d9e" (UID: "69797704-2611-4e94-8321-878049b18d9e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.290399 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "69797704-2611-4e94-8321-878049b18d9e" (UID: "69797704-2611-4e94-8321-878049b18d9e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.354018 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "69797704-2611-4e94-8321-878049b18d9e" (UID: "69797704-2611-4e94-8321-878049b18d9e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.366179 4705 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.366216 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbpdc\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-kube-api-access-sbpdc\") on node \"crc\" DevicePath \"\"" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.366227 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.366237 4705 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.366246 4705 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.366255 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.366264 4705 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.366273 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.366282 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/69797704-2611-4e94-8321-878049b18d9e-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.366290 4705 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.366299 4705 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.366308 4705 reconciler_common.go:293] "Volume detached for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.366318 4705 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.373859 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-inventory" (OuterVolumeSpecName: "inventory") pod "69797704-2611-4e94-8321-878049b18d9e" (UID: "69797704-2611-4e94-8321-878049b18d9e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.469008 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69797704-2611-4e94-8321-878049b18d9e-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.716413 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" event={"ID":"69797704-2611-4e94-8321-878049b18d9e","Type":"ContainerDied","Data":"166cffac1d274b873a50bd7b76290d827f6701811f2456efcea3bc7c302202da"} Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.716464 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="166cffac1d274b873a50bd7b76290d827f6701811f2456efcea3bc7c302202da" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.716479 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-565zf" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.810592 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"] Jan 24 08:17:01 crc kubenswrapper[4705]: E0124 08:17:01.811097 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69797704-2611-4e94-8321-878049b18d9e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.811128 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="69797704-2611-4e94-8321-878049b18d9e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.811468 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="69797704-2611-4e94-8321-878049b18d9e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.812405 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.817136 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.817566 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.817744 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.817895 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.817913 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd"
Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.822771 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"]
Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.979593 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1dfe7e12-9632-436a-b440-02c0f710ca04-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bfqnl\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.979993 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bfqnl\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.980036 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9bqh\" (UniqueName: \"kubernetes.io/projected/1dfe7e12-9632-436a-b440-02c0f710ca04-kube-api-access-r9bqh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bfqnl\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.980099 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bfqnl\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:01 crc kubenswrapper[4705]: I0124 08:17:01.980119 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bfqnl\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:02 crc kubenswrapper[4705]: I0124 08:17:02.082274 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bfqnl\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:02 crc kubenswrapper[4705]: I0124 08:17:02.082322 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bfqnl\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:02 crc kubenswrapper[4705]: I0124 08:17:02.082417 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1dfe7e12-9632-436a-b440-02c0f710ca04-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bfqnl\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:02 crc kubenswrapper[4705]: I0124 08:17:02.082486 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bfqnl\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:02 crc kubenswrapper[4705]: I0124 08:17:02.082533 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bqh\" (UniqueName: \"kubernetes.io/projected/1dfe7e12-9632-436a-b440-02c0f710ca04-kube-api-access-r9bqh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bfqnl\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:02 crc kubenswrapper[4705]: I0124 08:17:02.083609 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1dfe7e12-9632-436a-b440-02c0f710ca04-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bfqnl\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:02 crc kubenswrapper[4705]: I0124 08:17:02.086997 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bfqnl\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:02 crc kubenswrapper[4705]: I0124 08:17:02.087354 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bfqnl\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:02 crc kubenswrapper[4705]: I0124 08:17:02.093729 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bfqnl\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:02 crc kubenswrapper[4705]: I0124 08:17:02.099362 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9bqh\" (UniqueName: \"kubernetes.io/projected/1dfe7e12-9632-436a-b440-02c0f710ca04-kube-api-access-r9bqh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bfqnl\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:02 crc kubenswrapper[4705]: I0124 08:17:02.144672 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:17:02 crc kubenswrapper[4705]: I0124 08:17:02.731381 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"]
Jan 24 08:17:03 crc kubenswrapper[4705]: I0124 08:17:03.786011 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl" event={"ID":"1dfe7e12-9632-436a-b440-02c0f710ca04","Type":"ContainerStarted","Data":"478895b2822218a5b837d845cfb510ad4a71218c53f9222b94fdaa7f2ff10905"}
Jan 24 08:17:03 crc kubenswrapper[4705]: I0124 08:17:03.786639 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl" event={"ID":"1dfe7e12-9632-436a-b440-02c0f710ca04","Type":"ContainerStarted","Data":"10d71bb1ba7c2888c3720817fb4f826ce3bd689a8987574236f950c618d2705e"}
Jan 24 08:17:03 crc kubenswrapper[4705]: I0124 08:17:03.811095 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl" podStartSLOduration=2.270278623 podStartE2EDuration="2.811076022s" podCreationTimestamp="2026-01-24 08:17:01 +0000 UTC" firstStartedPulling="2026-01-24 08:17:02.741929803 +0000 UTC m=+2161.461803091" lastFinishedPulling="2026-01-24 08:17:03.282727202 +0000 UTC m=+2162.002600490" observedRunningTime="2026-01-24 08:17:03.802763022 +0000 UTC m=+2162.522636310" watchObservedRunningTime="2026-01-24 08:17:03.811076022 +0000 UTC m=+2162.530949310"
Jan 24 08:17:06 crc kubenswrapper[4705]: E0124 08:17:06.186316 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69797704_2611_4e94_8321_878049b18d9e.slice/crio-166cffac1d274b873a50bd7b76290d827f6701811f2456efcea3bc7c302202da\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69797704_2611_4e94_8321_878049b18d9e.slice\": RecentStats: unable to find data in memory cache]"
Jan 24 08:17:16 crc kubenswrapper[4705]: E0124 08:17:16.489166 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69797704_2611_4e94_8321_878049b18d9e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69797704_2611_4e94_8321_878049b18d9e.slice/crio-166cffac1d274b873a50bd7b76290d827f6701811f2456efcea3bc7c302202da\": RecentStats: unable to find data in memory cache]"
Jan 24 08:17:26 crc kubenswrapper[4705]: E0124 08:17:26.708460 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69797704_2611_4e94_8321_878049b18d9e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69797704_2611_4e94_8321_878049b18d9e.slice/crio-166cffac1d274b873a50bd7b76290d827f6701811f2456efcea3bc7c302202da\": RecentStats: unable to find data in memory cache]"
Jan 24 08:17:36 crc kubenswrapper[4705]: E0124 08:17:36.962570 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69797704_2611_4e94_8321_878049b18d9e.slice/crio-166cffac1d274b873a50bd7b76290d827f6701811f2456efcea3bc7c302202da\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69797704_2611_4e94_8321_878049b18d9e.slice\": RecentStats: unable to find data in memory cache]"
Jan 24 08:17:47 crc kubenswrapper[4705]: E0124 08:17:47.196782 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69797704_2611_4e94_8321_878049b18d9e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69797704_2611_4e94_8321_878049b18d9e.slice/crio-166cffac1d274b873a50bd7b76290d827f6701811f2456efcea3bc7c302202da\": RecentStats: unable to find data in memory cache]"
Jan 24 08:17:57 crc kubenswrapper[4705]: E0124 08:17:57.447568 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69797704_2611_4e94_8321_878049b18d9e.slice/crio-166cffac1d274b873a50bd7b76290d827f6701811f2456efcea3bc7c302202da\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69797704_2611_4e94_8321_878049b18d9e.slice\": RecentStats: unable to find data in memory cache]"
Jan 24 08:18:17 crc kubenswrapper[4705]: I0124 08:18:17.626035 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4gzch"]
Jan 24 08:18:17 crc kubenswrapper[4705]: I0124 08:18:17.628899 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gzch"]
Jan 24 08:18:17 crc kubenswrapper[4705]: I0124 08:18:17.629024 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4gzch"
Jan 24 08:18:17 crc kubenswrapper[4705]: I0124 08:18:17.696097 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b601eb-3f7d-4aee-8587-807b77735c97-catalog-content\") pod \"community-operators-4gzch\" (UID: \"f4b601eb-3f7d-4aee-8587-807b77735c97\") " pod="openshift-marketplace/community-operators-4gzch"
Jan 24 08:18:17 crc kubenswrapper[4705]: I0124 08:18:17.696175 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b601eb-3f7d-4aee-8587-807b77735c97-utilities\") pod \"community-operators-4gzch\" (UID: \"f4b601eb-3f7d-4aee-8587-807b77735c97\") " pod="openshift-marketplace/community-operators-4gzch"
Jan 24 08:18:17 crc kubenswrapper[4705]: I0124 08:18:17.696366 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k629s\" (UniqueName: \"kubernetes.io/projected/f4b601eb-3f7d-4aee-8587-807b77735c97-kube-api-access-k629s\") pod \"community-operators-4gzch\" (UID: \"f4b601eb-3f7d-4aee-8587-807b77735c97\") " pod="openshift-marketplace/community-operators-4gzch"
Jan 24 08:18:17 crc kubenswrapper[4705]: I0124 08:18:17.797320 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b601eb-3f7d-4aee-8587-807b77735c97-catalog-content\") pod \"community-operators-4gzch\" (UID: \"f4b601eb-3f7d-4aee-8587-807b77735c97\") " pod="openshift-marketplace/community-operators-4gzch"
Jan 24 08:18:17 crc kubenswrapper[4705]: I0124 08:18:17.797386 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b601eb-3f7d-4aee-8587-807b77735c97-utilities\") pod \"community-operators-4gzch\" (UID: \"f4b601eb-3f7d-4aee-8587-807b77735c97\") " pod="openshift-marketplace/community-operators-4gzch"
Jan 24 08:18:17 crc kubenswrapper[4705]: I0124 08:18:17.797504 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k629s\" (UniqueName: \"kubernetes.io/projected/f4b601eb-3f7d-4aee-8587-807b77735c97-kube-api-access-k629s\") pod \"community-operators-4gzch\" (UID: \"f4b601eb-3f7d-4aee-8587-807b77735c97\") " pod="openshift-marketplace/community-operators-4gzch"
Jan 24 08:18:17 crc kubenswrapper[4705]: I0124 08:18:17.797978 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b601eb-3f7d-4aee-8587-807b77735c97-catalog-content\") pod \"community-operators-4gzch\" (UID: \"f4b601eb-3f7d-4aee-8587-807b77735c97\") " pod="openshift-marketplace/community-operators-4gzch"
Jan 24 08:18:17 crc kubenswrapper[4705]: I0124 08:18:17.798046 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b601eb-3f7d-4aee-8587-807b77735c97-utilities\") pod \"community-operators-4gzch\" (UID: \"f4b601eb-3f7d-4aee-8587-807b77735c97\") " pod="openshift-marketplace/community-operators-4gzch"
Jan 24 08:18:17 crc kubenswrapper[4705]: I0124 08:18:17.820244 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k629s\" (UniqueName: \"kubernetes.io/projected/f4b601eb-3f7d-4aee-8587-807b77735c97-kube-api-access-k629s\") pod \"community-operators-4gzch\" (UID: \"f4b601eb-3f7d-4aee-8587-807b77735c97\") " pod="openshift-marketplace/community-operators-4gzch"
Jan 24 08:18:17 crc kubenswrapper[4705]: I0124 08:18:17.969311 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4gzch"
Jan 24 08:18:18 crc kubenswrapper[4705]: I0124 08:18:18.652463 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gzch"]
Jan 24 08:18:18 crc kubenswrapper[4705]: W0124 08:18:18.656734 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b601eb_3f7d_4aee_8587_807b77735c97.slice/crio-cd1fbe4b397f810915a781ee62899e8307355a48efde467d187ed50c951f1629 WatchSource:0}: Error finding container cd1fbe4b397f810915a781ee62899e8307355a48efde467d187ed50c951f1629: Status 404 returned error can't find the container with id cd1fbe4b397f810915a781ee62899e8307355a48efde467d187ed50c951f1629
Jan 24 08:18:18 crc kubenswrapper[4705]: I0124 08:18:18.888392 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b601eb-3f7d-4aee-8587-807b77735c97" containerID="6eb4d30a0a3cda5cd1bb2e6650a54afc8b9830b75b8687a037c88d50a433350a" exitCode=0
Jan 24 08:18:18 crc kubenswrapper[4705]: I0124 08:18:18.888497 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gzch" event={"ID":"f4b601eb-3f7d-4aee-8587-807b77735c97","Type":"ContainerDied","Data":"6eb4d30a0a3cda5cd1bb2e6650a54afc8b9830b75b8687a037c88d50a433350a"}
Jan 24 08:18:18 crc kubenswrapper[4705]: I0124 08:18:18.888685 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gzch" event={"ID":"f4b601eb-3f7d-4aee-8587-807b77735c97","Type":"ContainerStarted","Data":"cd1fbe4b397f810915a781ee62899e8307355a48efde467d187ed50c951f1629"}
Jan 24 08:18:19 crc kubenswrapper[4705]: I0124 08:18:19.900754 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gzch" event={"ID":"f4b601eb-3f7d-4aee-8587-807b77735c97","Type":"ContainerStarted","Data":"1135ace7705168aca2623558f807cb0261a89d2db9886738b5c50f7a6ca4baec"}
Jan 24 08:18:19 crc kubenswrapper[4705]: I0124 08:18:19.905527 4705 generic.go:334] "Generic (PLEG): container finished" podID="1dfe7e12-9632-436a-b440-02c0f710ca04" containerID="478895b2822218a5b837d845cfb510ad4a71218c53f9222b94fdaa7f2ff10905" exitCode=0
Jan 24 08:18:19 crc kubenswrapper[4705]: I0124 08:18:19.905584 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl" event={"ID":"1dfe7e12-9632-436a-b440-02c0f710ca04","Type":"ContainerDied","Data":"478895b2822218a5b837d845cfb510ad4a71218c53f9222b94fdaa7f2ff10905"}
Jan 24 08:18:20 crc kubenswrapper[4705]: I0124 08:18:20.934928 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b601eb-3f7d-4aee-8587-807b77735c97" containerID="1135ace7705168aca2623558f807cb0261a89d2db9886738b5c50f7a6ca4baec" exitCode=0
Jan 24 08:18:20 crc kubenswrapper[4705]: I0124 08:18:20.935934 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gzch" event={"ID":"f4b601eb-3f7d-4aee-8587-807b77735c97","Type":"ContainerDied","Data":"1135ace7705168aca2623558f807cb0261a89d2db9886738b5c50f7a6ca4baec"}
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.421211 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.657260 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9bqh\" (UniqueName: \"kubernetes.io/projected/1dfe7e12-9632-436a-b440-02c0f710ca04-kube-api-access-r9bqh\") pod \"1dfe7e12-9632-436a-b440-02c0f710ca04\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") "
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.657662 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1dfe7e12-9632-436a-b440-02c0f710ca04-ovncontroller-config-0\") pod \"1dfe7e12-9632-436a-b440-02c0f710ca04\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") "
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.657713 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-ovn-combined-ca-bundle\") pod \"1dfe7e12-9632-436a-b440-02c0f710ca04\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") "
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.657931 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-ssh-key-openstack-edpm-ipam\") pod \"1dfe7e12-9632-436a-b440-02c0f710ca04\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") "
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.658060 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-inventory\") pod \"1dfe7e12-9632-436a-b440-02c0f710ca04\" (UID: \"1dfe7e12-9632-436a-b440-02c0f710ca04\") "
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.664110 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dfe7e12-9632-436a-b440-02c0f710ca04-kube-api-access-r9bqh" (OuterVolumeSpecName: "kube-api-access-r9bqh") pod "1dfe7e12-9632-436a-b440-02c0f710ca04" (UID: "1dfe7e12-9632-436a-b440-02c0f710ca04"). InnerVolumeSpecName "kube-api-access-r9bqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.664505 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1dfe7e12-9632-436a-b440-02c0f710ca04" (UID: "1dfe7e12-9632-436a-b440-02c0f710ca04"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.695307 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1dfe7e12-9632-436a-b440-02c0f710ca04" (UID: "1dfe7e12-9632-436a-b440-02c0f710ca04"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.698604 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dfe7e12-9632-436a-b440-02c0f710ca04-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "1dfe7e12-9632-436a-b440-02c0f710ca04" (UID: "1dfe7e12-9632-436a-b440-02c0f710ca04"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.700729 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-inventory" (OuterVolumeSpecName: "inventory") pod "1dfe7e12-9632-436a-b440-02c0f710ca04" (UID: "1dfe7e12-9632-436a-b440-02c0f710ca04"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.760319 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9bqh\" (UniqueName: \"kubernetes.io/projected/1dfe7e12-9632-436a-b440-02c0f710ca04-kube-api-access-r9bqh\") on node \"crc\" DevicePath \"\""
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.761159 4705 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1dfe7e12-9632-436a-b440-02c0f710ca04-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.761332 4705 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.761380 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.761397 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dfe7e12-9632-436a-b440-02c0f710ca04-inventory\") on node \"crc\" DevicePath \"\""
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.948011 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gzch" event={"ID":"f4b601eb-3f7d-4aee-8587-807b77735c97","Type":"ContainerStarted","Data":"eb537870ab7e8da2b19de62d974f77922080fa745300c5d0259d35ca7b26ecff"}
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.950015 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl" event={"ID":"1dfe7e12-9632-436a-b440-02c0f710ca04","Type":"ContainerDied","Data":"10d71bb1ba7c2888c3720817fb4f826ce3bd689a8987574236f950c618d2705e"}
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.950161 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10d71bb1ba7c2888c3720817fb4f826ce3bd689a8987574236f950c618d2705e"
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.950067 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bfqnl"
Jan 24 08:18:21 crc kubenswrapper[4705]: I0124 08:18:21.974752 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4gzch" podStartSLOduration=2.41943108 podStartE2EDuration="4.974715961s" podCreationTimestamp="2026-01-24 08:18:17 +0000 UTC" firstStartedPulling="2026-01-24 08:18:18.890332621 +0000 UTC m=+2237.610205909" lastFinishedPulling="2026-01-24 08:18:21.445617502 +0000 UTC m=+2240.165490790" observedRunningTime="2026-01-24 08:18:21.970978894 +0000 UTC m=+2240.690852192" watchObservedRunningTime="2026-01-24 08:18:21.974715961 +0000 UTC m=+2240.694589249"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.050480 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"]
Jan 24 08:18:22 crc kubenswrapper[4705]: E0124 08:18:22.050997 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dfe7e12-9632-436a-b440-02c0f710ca04" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.051014 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dfe7e12-9632-436a-b440-02c0f710ca04" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.051271 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dfe7e12-9632-436a-b440-02c0f710ca04" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.052201 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.059976 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"]
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.061474 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.061628 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.061700 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.061719 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.061743 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.062145 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 24 
08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.169603 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.170002 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.170031 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.170081 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-645g5\" (UniqueName: \"kubernetes.io/projected/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-kube-api-access-645g5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.170116 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.170162 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.271449 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.271810 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.272002 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.272211 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.272296 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.272414 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-645g5\" (UniqueName: \"kubernetes.io/projected/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-kube-api-access-645g5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.276025 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.276108 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.276300 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.276385 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.277075 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.288164 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-645g5\" (UniqueName: \"kubernetes.io/projected/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-kube-api-access-645g5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:22 crc kubenswrapper[4705]: I0124 08:18:22.379661 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"
Jan 24 08:18:23 crc kubenswrapper[4705]: I0124 08:18:23.030901 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz"]
Jan 24 08:18:23 crc kubenswrapper[4705]: W0124 08:18:23.034372 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f03fcbf_053b_4f3b_b96a_e7f325f36a0a.slice/crio-272818a3fe8d73e9468a64f485e50d47efae0dcb7b9a1a5be3b2a1df500b1797 WatchSource:0}: Error finding container 272818a3fe8d73e9468a64f485e50d47efae0dcb7b9a1a5be3b2a1df500b1797: Status 404 returned error can't find the container with id 272818a3fe8d73e9468a64f485e50d47efae0dcb7b9a1a5be3b2a1df500b1797
Jan 24 08:18:23 crc kubenswrapper[4705]: I0124 08:18:23.970798 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz" event={"ID":"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a","Type":"ContainerStarted","Data":"d0aa0aa8eac9ff3f1b30e1e49c7cb34443803be7dd6144b2b02fe5504a662a29"}
Jan 24 08:18:23 crc kubenswrapper[4705]: I0124 08:18:23.971338 
4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz" event={"ID":"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a","Type":"ContainerStarted","Data":"272818a3fe8d73e9468a64f485e50d47efae0dcb7b9a1a5be3b2a1df500b1797"} Jan 24 08:18:23 crc kubenswrapper[4705]: I0124 08:18:23.994034 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz" podStartSLOduration=1.533816208 podStartE2EDuration="1.994011626s" podCreationTimestamp="2026-01-24 08:18:22 +0000 UTC" firstStartedPulling="2026-01-24 08:18:23.037664696 +0000 UTC m=+2241.757537984" lastFinishedPulling="2026-01-24 08:18:23.497860114 +0000 UTC m=+2242.217733402" observedRunningTime="2026-01-24 08:18:23.98930254 +0000 UTC m=+2242.709175838" watchObservedRunningTime="2026-01-24 08:18:23.994011626 +0000 UTC m=+2242.713884914" Jan 24 08:18:27 crc kubenswrapper[4705]: I0124 08:18:27.970032 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4gzch" Jan 24 08:18:27 crc kubenswrapper[4705]: I0124 08:18:27.971416 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4gzch" Jan 24 08:18:28 crc kubenswrapper[4705]: I0124 08:18:28.022413 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4gzch" Jan 24 08:18:28 crc kubenswrapper[4705]: I0124 08:18:28.070722 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4gzch" Jan 24 08:18:28 crc kubenswrapper[4705]: I0124 08:18:28.262086 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4gzch"] Jan 24 08:18:30 crc kubenswrapper[4705]: I0124 08:18:30.026450 4705 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/community-operators-4gzch" podUID="f4b601eb-3f7d-4aee-8587-807b77735c97" containerName="registry-server" containerID="cri-o://eb537870ab7e8da2b19de62d974f77922080fa745300c5d0259d35ca7b26ecff" gracePeriod=2 Jan 24 08:18:31 crc kubenswrapper[4705]: I0124 08:18:31.036499 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b601eb-3f7d-4aee-8587-807b77735c97" containerID="eb537870ab7e8da2b19de62d974f77922080fa745300c5d0259d35ca7b26ecff" exitCode=0 Jan 24 08:18:31 crc kubenswrapper[4705]: I0124 08:18:31.036574 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gzch" event={"ID":"f4b601eb-3f7d-4aee-8587-807b77735c97","Type":"ContainerDied","Data":"eb537870ab7e8da2b19de62d974f77922080fa745300c5d0259d35ca7b26ecff"} Jan 24 08:18:31 crc kubenswrapper[4705]: I0124 08:18:31.597784 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4gzch" Jan 24 08:18:31 crc kubenswrapper[4705]: I0124 08:18:31.733698 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k629s\" (UniqueName: \"kubernetes.io/projected/f4b601eb-3f7d-4aee-8587-807b77735c97-kube-api-access-k629s\") pod \"f4b601eb-3f7d-4aee-8587-807b77735c97\" (UID: \"f4b601eb-3f7d-4aee-8587-807b77735c97\") " Jan 24 08:18:31 crc kubenswrapper[4705]: I0124 08:18:31.733772 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b601eb-3f7d-4aee-8587-807b77735c97-utilities\") pod \"f4b601eb-3f7d-4aee-8587-807b77735c97\" (UID: \"f4b601eb-3f7d-4aee-8587-807b77735c97\") " Jan 24 08:18:31 crc kubenswrapper[4705]: I0124 08:18:31.734054 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b601eb-3f7d-4aee-8587-807b77735c97-catalog-content\") pod 
\"f4b601eb-3f7d-4aee-8587-807b77735c97\" (UID: \"f4b601eb-3f7d-4aee-8587-807b77735c97\") " Jan 24 08:18:31 crc kubenswrapper[4705]: I0124 08:18:31.735040 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4b601eb-3f7d-4aee-8587-807b77735c97-utilities" (OuterVolumeSpecName: "utilities") pod "f4b601eb-3f7d-4aee-8587-807b77735c97" (UID: "f4b601eb-3f7d-4aee-8587-807b77735c97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:18:31 crc kubenswrapper[4705]: I0124 08:18:31.739697 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b601eb-3f7d-4aee-8587-807b77735c97-kube-api-access-k629s" (OuterVolumeSpecName: "kube-api-access-k629s") pod "f4b601eb-3f7d-4aee-8587-807b77735c97" (UID: "f4b601eb-3f7d-4aee-8587-807b77735c97"). InnerVolumeSpecName "kube-api-access-k629s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:18:31 crc kubenswrapper[4705]: I0124 08:18:31.828751 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4b601eb-3f7d-4aee-8587-807b77735c97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4b601eb-3f7d-4aee-8587-807b77735c97" (UID: "f4b601eb-3f7d-4aee-8587-807b77735c97"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:18:31 crc kubenswrapper[4705]: I0124 08:18:31.837622 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b601eb-3f7d-4aee-8587-807b77735c97-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:18:31 crc kubenswrapper[4705]: I0124 08:18:31.837673 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k629s\" (UniqueName: \"kubernetes.io/projected/f4b601eb-3f7d-4aee-8587-807b77735c97-kube-api-access-k629s\") on node \"crc\" DevicePath \"\"" Jan 24 08:18:31 crc kubenswrapper[4705]: I0124 08:18:31.837690 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b601eb-3f7d-4aee-8587-807b77735c97-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:18:32 crc kubenswrapper[4705]: I0124 08:18:32.051977 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gzch" event={"ID":"f4b601eb-3f7d-4aee-8587-807b77735c97","Type":"ContainerDied","Data":"cd1fbe4b397f810915a781ee62899e8307355a48efde467d187ed50c951f1629"} Jan 24 08:18:32 crc kubenswrapper[4705]: I0124 08:18:32.052403 4705 scope.go:117] "RemoveContainer" containerID="eb537870ab7e8da2b19de62d974f77922080fa745300c5d0259d35ca7b26ecff" Jan 24 08:18:32 crc kubenswrapper[4705]: I0124 08:18:32.052631 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4gzch" Jan 24 08:18:32 crc kubenswrapper[4705]: I0124 08:18:32.089809 4705 scope.go:117] "RemoveContainer" containerID="1135ace7705168aca2623558f807cb0261a89d2db9886738b5c50f7a6ca4baec" Jan 24 08:18:32 crc kubenswrapper[4705]: I0124 08:18:32.092757 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4gzch"] Jan 24 08:18:32 crc kubenswrapper[4705]: I0124 08:18:32.102102 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4gzch"] Jan 24 08:18:32 crc kubenswrapper[4705]: I0124 08:18:32.117922 4705 scope.go:117] "RemoveContainer" containerID="6eb4d30a0a3cda5cd1bb2e6650a54afc8b9830b75b8687a037c88d50a433350a" Jan 24 08:18:33 crc kubenswrapper[4705]: I0124 08:18:33.588377 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b601eb-3f7d-4aee-8587-807b77735c97" path="/var/lib/kubelet/pods/f4b601eb-3f7d-4aee-8587-807b77735c97/volumes" Jan 24 08:18:37 crc kubenswrapper[4705]: I0124 08:18:37.071442 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:18:37 crc kubenswrapper[4705]: I0124 08:18:37.071842 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:19:07 crc kubenswrapper[4705]: I0124 08:19:07.071879 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:19:07 crc kubenswrapper[4705]: I0124 08:19:07.072328 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.076632 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8tpdz"] Jan 24 08:19:19 crc kubenswrapper[4705]: E0124 08:19:19.077740 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b601eb-3f7d-4aee-8587-807b77735c97" containerName="extract-utilities" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.077759 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b601eb-3f7d-4aee-8587-807b77735c97" containerName="extract-utilities" Jan 24 08:19:19 crc kubenswrapper[4705]: E0124 08:19:19.077785 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b601eb-3f7d-4aee-8587-807b77735c97" containerName="registry-server" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.077793 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b601eb-3f7d-4aee-8587-807b77735c97" containerName="registry-server" Jan 24 08:19:19 crc kubenswrapper[4705]: E0124 08:19:19.077831 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b601eb-3f7d-4aee-8587-807b77735c97" containerName="extract-content" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.077855 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b601eb-3f7d-4aee-8587-807b77735c97" containerName="extract-content" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.078200 4705 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b601eb-3f7d-4aee-8587-807b77735c97" containerName="registry-server" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.079957 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.100483 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8tpdz"] Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.188395 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cedd79c0-cf51-4320-866a-92af2f646b13-utilities\") pod \"certified-operators-8tpdz\" (UID: \"cedd79c0-cf51-4320-866a-92af2f646b13\") " pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.188477 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58bst\" (UniqueName: \"kubernetes.io/projected/cedd79c0-cf51-4320-866a-92af2f646b13-kube-api-access-58bst\") pod \"certified-operators-8tpdz\" (UID: \"cedd79c0-cf51-4320-866a-92af2f646b13\") " pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.189027 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cedd79c0-cf51-4320-866a-92af2f646b13-catalog-content\") pod \"certified-operators-8tpdz\" (UID: \"cedd79c0-cf51-4320-866a-92af2f646b13\") " pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.290902 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cedd79c0-cf51-4320-866a-92af2f646b13-catalog-content\") pod 
\"certified-operators-8tpdz\" (UID: \"cedd79c0-cf51-4320-866a-92af2f646b13\") " pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.291017 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cedd79c0-cf51-4320-866a-92af2f646b13-utilities\") pod \"certified-operators-8tpdz\" (UID: \"cedd79c0-cf51-4320-866a-92af2f646b13\") " pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.291050 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58bst\" (UniqueName: \"kubernetes.io/projected/cedd79c0-cf51-4320-866a-92af2f646b13-kube-api-access-58bst\") pod \"certified-operators-8tpdz\" (UID: \"cedd79c0-cf51-4320-866a-92af2f646b13\") " pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.291733 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cedd79c0-cf51-4320-866a-92af2f646b13-catalog-content\") pod \"certified-operators-8tpdz\" (UID: \"cedd79c0-cf51-4320-866a-92af2f646b13\") " pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.292011 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cedd79c0-cf51-4320-866a-92af2f646b13-utilities\") pod \"certified-operators-8tpdz\" (UID: \"cedd79c0-cf51-4320-866a-92af2f646b13\") " pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.319659 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58bst\" (UniqueName: \"kubernetes.io/projected/cedd79c0-cf51-4320-866a-92af2f646b13-kube-api-access-58bst\") pod \"certified-operators-8tpdz\" (UID: 
\"cedd79c0-cf51-4320-866a-92af2f646b13\") " pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.399120 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:19 crc kubenswrapper[4705]: I0124 08:19:19.957279 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8tpdz"] Jan 24 08:19:20 crc kubenswrapper[4705]: I0124 08:19:20.493059 4705 generic.go:334] "Generic (PLEG): container finished" podID="cedd79c0-cf51-4320-866a-92af2f646b13" containerID="5270b4cef9f2f20dd1f8597ca32f0f2827abec5d8910fe2e69885fd63b1828fe" exitCode=0 Jan 24 08:19:20 crc kubenswrapper[4705]: I0124 08:19:20.493112 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tpdz" event={"ID":"cedd79c0-cf51-4320-866a-92af2f646b13","Type":"ContainerDied","Data":"5270b4cef9f2f20dd1f8597ca32f0f2827abec5d8910fe2e69885fd63b1828fe"} Jan 24 08:19:20 crc kubenswrapper[4705]: I0124 08:19:20.493140 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tpdz" event={"ID":"cedd79c0-cf51-4320-866a-92af2f646b13","Type":"ContainerStarted","Data":"f10b63245f0ae3667de7c80fd875399b36360d46c73bad88690a20af2888c44e"} Jan 24 08:19:21 crc kubenswrapper[4705]: I0124 08:19:21.504580 4705 generic.go:334] "Generic (PLEG): container finished" podID="3f03fcbf-053b-4f3b-b96a-e7f325f36a0a" containerID="d0aa0aa8eac9ff3f1b30e1e49c7cb34443803be7dd6144b2b02fe5504a662a29" exitCode=0 Jan 24 08:19:21 crc kubenswrapper[4705]: I0124 08:19:21.504679 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz" event={"ID":"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a","Type":"ContainerDied","Data":"d0aa0aa8eac9ff3f1b30e1e49c7cb34443803be7dd6144b2b02fe5504a662a29"} Jan 24 08:19:22 crc 
kubenswrapper[4705]: I0124 08:19:22.515217 4705 generic.go:334] "Generic (PLEG): container finished" podID="cedd79c0-cf51-4320-866a-92af2f646b13" containerID="209aa05e5a12e6e3fa10b470874bc6698381fe2fcd6f48037c77ea52a0aa0cab" exitCode=0 Jan 24 08:19:22 crc kubenswrapper[4705]: I0124 08:19:22.515304 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tpdz" event={"ID":"cedd79c0-cf51-4320-866a-92af2f646b13","Type":"ContainerDied","Data":"209aa05e5a12e6e3fa10b470874bc6698381fe2fcd6f48037c77ea52a0aa0cab"} Jan 24 08:19:22 crc kubenswrapper[4705]: I0124 08:19:22.936239 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.081322 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-ssh-key-openstack-edpm-ipam\") pod \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.081453 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.081473 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-645g5\" (UniqueName: \"kubernetes.io/projected/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-kube-api-access-645g5\") pod \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.082205 4705 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-neutron-metadata-combined-ca-bundle\") pod \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.082241 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-nova-metadata-neutron-config-0\") pod \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.082262 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-inventory\") pod \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\" (UID: \"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a\") " Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.087552 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-kube-api-access-645g5" (OuterVolumeSpecName: "kube-api-access-645g5") pod "3f03fcbf-053b-4f3b-b96a-e7f325f36a0a" (UID: "3f03fcbf-053b-4f3b-b96a-e7f325f36a0a"). InnerVolumeSpecName "kube-api-access-645g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.087945 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3f03fcbf-053b-4f3b-b96a-e7f325f36a0a" (UID: "3f03fcbf-053b-4f3b-b96a-e7f325f36a0a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.117395 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "3f03fcbf-053b-4f3b-b96a-e7f325f36a0a" (UID: "3f03fcbf-053b-4f3b-b96a-e7f325f36a0a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.122761 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-inventory" (OuterVolumeSpecName: "inventory") pod "3f03fcbf-053b-4f3b-b96a-e7f325f36a0a" (UID: "3f03fcbf-053b-4f3b-b96a-e7f325f36a0a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.125129 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3f03fcbf-053b-4f3b-b96a-e7f325f36a0a" (UID: "3f03fcbf-053b-4f3b-b96a-e7f325f36a0a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.139033 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "3f03fcbf-053b-4f3b-b96a-e7f325f36a0a" (UID: "3f03fcbf-053b-4f3b-b96a-e7f325f36a0a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.184528 4705 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.184585 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-645g5\" (UniqueName: \"kubernetes.io/projected/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-kube-api-access-645g5\") on node \"crc\" DevicePath \"\"" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.184603 4705 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.184616 4705 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.184628 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.184639 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3f03fcbf-053b-4f3b-b96a-e7f325f36a0a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.528051 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tpdz" 
event={"ID":"cedd79c0-cf51-4320-866a-92af2f646b13","Type":"ContainerStarted","Data":"2471b02c02252c1cb41cacc28511260cdea18541bfa5648b97b4c25cb953a3a7"} Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.530196 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz" event={"ID":"3f03fcbf-053b-4f3b-b96a-e7f325f36a0a","Type":"ContainerDied","Data":"272818a3fe8d73e9468a64f485e50d47efae0dcb7b9a1a5be3b2a1df500b1797"} Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.530247 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="272818a3fe8d73e9468a64f485e50d47efae0dcb7b9a1a5be3b2a1df500b1797" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.530359 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.574376 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8tpdz" podStartSLOduration=2.111867261 podStartE2EDuration="4.574342995s" podCreationTimestamp="2026-01-24 08:19:19 +0000 UTC" firstStartedPulling="2026-01-24 08:19:20.494902246 +0000 UTC m=+2299.214775534" lastFinishedPulling="2026-01-24 08:19:22.95737798 +0000 UTC m=+2301.677251268" observedRunningTime="2026-01-24 08:19:23.553816055 +0000 UTC m=+2302.273689343" watchObservedRunningTime="2026-01-24 08:19:23.574342995 +0000 UTC m=+2302.294216283" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.695591 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs"] Jan 24 08:19:23 crc kubenswrapper[4705]: E0124 08:19:23.698654 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f03fcbf-053b-4f3b-b96a-e7f325f36a0a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 24 
08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.698691 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f03fcbf-053b-4f3b-b96a-e7f325f36a0a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.699104 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f03fcbf-053b-4f3b-b96a-e7f325f36a0a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.700064 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.704643 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.704859 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.705098 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.705239 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.705424 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.709598 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs"] Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.797009 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-inventory\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.797082 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.797155 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89rfz\" (UniqueName: \"kubernetes.io/projected/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-kube-api-access-89rfz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.797180 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.797480 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.898948 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.899028 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.899083 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89rfz\" (UniqueName: \"kubernetes.io/projected/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-kube-api-access-89rfz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.899109 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.899179 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" 
(UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.904042 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.906905 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.907209 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.907722 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 
08:19:23 crc kubenswrapper[4705]: I0124 08:19:23.921503 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89rfz\" (UniqueName: \"kubernetes.io/projected/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-kube-api-access-89rfz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:19:24 crc kubenswrapper[4705]: I0124 08:19:24.046559 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:19:24 crc kubenswrapper[4705]: I0124 08:19:24.704919 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs"] Jan 24 08:19:24 crc kubenswrapper[4705]: W0124 08:19:24.705505 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod892c0147_b4a3_451d_9c4c_c2a0cb3cf56e.slice/crio-b99143039fd40af314bac83ec510daa644f950a72fc96f442a427468abfeba71 WatchSource:0}: Error finding container b99143039fd40af314bac83ec510daa644f950a72fc96f442a427468abfeba71: Status 404 returned error can't find the container with id b99143039fd40af314bac83ec510daa644f950a72fc96f442a427468abfeba71 Jan 24 08:19:25 crc kubenswrapper[4705]: I0124 08:19:25.547300 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" event={"ID":"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e","Type":"ContainerStarted","Data":"42e5ba8f098443bbf4945bd1edffd0ad06e0bbdcf8cd04f2a94fbd5c26655240"} Jan 24 08:19:25 crc kubenswrapper[4705]: I0124 08:19:25.547609 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" 
event={"ID":"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e","Type":"ContainerStarted","Data":"b99143039fd40af314bac83ec510daa644f950a72fc96f442a427468abfeba71"} Jan 24 08:19:25 crc kubenswrapper[4705]: I0124 08:19:25.564072 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" podStartSLOduration=2.111592743 podStartE2EDuration="2.564042399s" podCreationTimestamp="2026-01-24 08:19:23 +0000 UTC" firstStartedPulling="2026-01-24 08:19:24.715011203 +0000 UTC m=+2303.434884491" lastFinishedPulling="2026-01-24 08:19:25.167460859 +0000 UTC m=+2303.887334147" observedRunningTime="2026-01-24 08:19:25.563705269 +0000 UTC m=+2304.283578567" watchObservedRunningTime="2026-01-24 08:19:25.564042399 +0000 UTC m=+2304.283915687" Jan 24 08:19:29 crc kubenswrapper[4705]: I0124 08:19:29.399678 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:29 crc kubenswrapper[4705]: I0124 08:19:29.400281 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:29 crc kubenswrapper[4705]: I0124 08:19:29.450495 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:29 crc kubenswrapper[4705]: I0124 08:19:29.628847 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:30 crc kubenswrapper[4705]: I0124 08:19:30.869192 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8tpdz"] Jan 24 08:19:31 crc kubenswrapper[4705]: I0124 08:19:31.598557 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8tpdz" podUID="cedd79c0-cf51-4320-866a-92af2f646b13" 
containerName="registry-server" containerID="cri-o://2471b02c02252c1cb41cacc28511260cdea18541bfa5648b97b4c25cb953a3a7" gracePeriod=2 Jan 24 08:19:32 crc kubenswrapper[4705]: I0124 08:19:32.611535 4705 generic.go:334] "Generic (PLEG): container finished" podID="cedd79c0-cf51-4320-866a-92af2f646b13" containerID="2471b02c02252c1cb41cacc28511260cdea18541bfa5648b97b4c25cb953a3a7" exitCode=0 Jan 24 08:19:32 crc kubenswrapper[4705]: I0124 08:19:32.611634 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tpdz" event={"ID":"cedd79c0-cf51-4320-866a-92af2f646b13","Type":"ContainerDied","Data":"2471b02c02252c1cb41cacc28511260cdea18541bfa5648b97b4c25cb953a3a7"} Jan 24 08:19:32 crc kubenswrapper[4705]: I0124 08:19:32.907147 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:33 crc kubenswrapper[4705]: I0124 08:19:33.077876 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cedd79c0-cf51-4320-866a-92af2f646b13-utilities\") pod \"cedd79c0-cf51-4320-866a-92af2f646b13\" (UID: \"cedd79c0-cf51-4320-866a-92af2f646b13\") " Jan 24 08:19:33 crc kubenswrapper[4705]: I0124 08:19:33.078044 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58bst\" (UniqueName: \"kubernetes.io/projected/cedd79c0-cf51-4320-866a-92af2f646b13-kube-api-access-58bst\") pod \"cedd79c0-cf51-4320-866a-92af2f646b13\" (UID: \"cedd79c0-cf51-4320-866a-92af2f646b13\") " Jan 24 08:19:33 crc kubenswrapper[4705]: I0124 08:19:33.078107 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cedd79c0-cf51-4320-866a-92af2f646b13-catalog-content\") pod \"cedd79c0-cf51-4320-866a-92af2f646b13\" (UID: \"cedd79c0-cf51-4320-866a-92af2f646b13\") " Jan 24 08:19:33 crc 
kubenswrapper[4705]: I0124 08:19:33.079014 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cedd79c0-cf51-4320-866a-92af2f646b13-utilities" (OuterVolumeSpecName: "utilities") pod "cedd79c0-cf51-4320-866a-92af2f646b13" (UID: "cedd79c0-cf51-4320-866a-92af2f646b13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:19:33 crc kubenswrapper[4705]: I0124 08:19:33.083614 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cedd79c0-cf51-4320-866a-92af2f646b13-kube-api-access-58bst" (OuterVolumeSpecName: "kube-api-access-58bst") pod "cedd79c0-cf51-4320-866a-92af2f646b13" (UID: "cedd79c0-cf51-4320-866a-92af2f646b13"). InnerVolumeSpecName "kube-api-access-58bst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:19:33 crc kubenswrapper[4705]: I0124 08:19:33.141629 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cedd79c0-cf51-4320-866a-92af2f646b13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cedd79c0-cf51-4320-866a-92af2f646b13" (UID: "cedd79c0-cf51-4320-866a-92af2f646b13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:19:33 crc kubenswrapper[4705]: I0124 08:19:33.180292 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cedd79c0-cf51-4320-866a-92af2f646b13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:19:33 crc kubenswrapper[4705]: I0124 08:19:33.180322 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cedd79c0-cf51-4320-866a-92af2f646b13-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:19:33 crc kubenswrapper[4705]: I0124 08:19:33.180334 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58bst\" (UniqueName: \"kubernetes.io/projected/cedd79c0-cf51-4320-866a-92af2f646b13-kube-api-access-58bst\") on node \"crc\" DevicePath \"\"" Jan 24 08:19:33 crc kubenswrapper[4705]: I0124 08:19:33.623757 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tpdz" event={"ID":"cedd79c0-cf51-4320-866a-92af2f646b13","Type":"ContainerDied","Data":"f10b63245f0ae3667de7c80fd875399b36360d46c73bad88690a20af2888c44e"} Jan 24 08:19:33 crc kubenswrapper[4705]: I0124 08:19:33.624167 4705 scope.go:117] "RemoveContainer" containerID="2471b02c02252c1cb41cacc28511260cdea18541bfa5648b97b4c25cb953a3a7" Jan 24 08:19:33 crc kubenswrapper[4705]: I0124 08:19:33.623813 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8tpdz" Jan 24 08:19:33 crc kubenswrapper[4705]: I0124 08:19:33.656329 4705 scope.go:117] "RemoveContainer" containerID="209aa05e5a12e6e3fa10b470874bc6698381fe2fcd6f48037c77ea52a0aa0cab" Jan 24 08:19:33 crc kubenswrapper[4705]: I0124 08:19:33.657844 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8tpdz"] Jan 24 08:19:33 crc kubenswrapper[4705]: I0124 08:19:33.667043 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8tpdz"] Jan 24 08:19:33 crc kubenswrapper[4705]: I0124 08:19:33.685674 4705 scope.go:117] "RemoveContainer" containerID="5270b4cef9f2f20dd1f8597ca32f0f2827abec5d8910fe2e69885fd63b1828fe" Jan 24 08:19:35 crc kubenswrapper[4705]: I0124 08:19:35.586815 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cedd79c0-cf51-4320-866a-92af2f646b13" path="/var/lib/kubelet/pods/cedd79c0-cf51-4320-866a-92af2f646b13/volumes" Jan 24 08:19:37 crc kubenswrapper[4705]: I0124 08:19:37.071408 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:19:37 crc kubenswrapper[4705]: I0124 08:19:37.072123 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:19:37 crc kubenswrapper[4705]: I0124 08:19:37.072257 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 
08:19:37 crc kubenswrapper[4705]: I0124 08:19:37.073133 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 08:19:37 crc kubenswrapper[4705]: I0124 08:19:37.073291 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" gracePeriod=600 Jan 24 08:19:37 crc kubenswrapper[4705]: E0124 08:19:37.203428 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:19:37 crc kubenswrapper[4705]: I0124 08:19:37.659323 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" exitCode=0 Jan 24 08:19:37 crc kubenswrapper[4705]: I0124 08:19:37.659374 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f"} Jan 24 08:19:37 crc kubenswrapper[4705]: I0124 08:19:37.659418 4705 scope.go:117] 
"RemoveContainer" containerID="e75dab5ce4ca72aff4b9b317d9abfccd45194eb72cb1bff0efc28fc55dba4e9f" Jan 24 08:19:37 crc kubenswrapper[4705]: I0124 08:19:37.660201 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:19:37 crc kubenswrapper[4705]: E0124 08:19:37.660544 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:19:50 crc kubenswrapper[4705]: I0124 08:19:50.575881 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:19:50 crc kubenswrapper[4705]: E0124 08:19:50.576650 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:20:03 crc kubenswrapper[4705]: I0124 08:20:03.575714 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:20:03 crc kubenswrapper[4705]: E0124 08:20:03.577176 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:20:15 crc kubenswrapper[4705]: I0124 08:20:15.576073 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:20:15 crc kubenswrapper[4705]: E0124 08:20:15.576858 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:20:30 crc kubenswrapper[4705]: I0124 08:20:30.575523 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:20:30 crc kubenswrapper[4705]: E0124 08:20:30.576381 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:20:44 crc kubenswrapper[4705]: I0124 08:20:44.575688 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:20:44 crc kubenswrapper[4705]: E0124 08:20:44.576495 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:20:57 crc kubenswrapper[4705]: I0124 08:20:57.576496 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:20:57 crc kubenswrapper[4705]: E0124 08:20:57.577321 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:21:08 crc kubenswrapper[4705]: I0124 08:21:08.576098 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:21:08 crc kubenswrapper[4705]: E0124 08:21:08.577974 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:21:22 crc kubenswrapper[4705]: I0124 08:21:22.576254 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:21:22 crc kubenswrapper[4705]: E0124 08:21:22.577114 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:21:33 crc kubenswrapper[4705]: I0124 08:21:33.575957 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:21:33 crc kubenswrapper[4705]: E0124 08:21:33.576706 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:21:44 crc kubenswrapper[4705]: I0124 08:21:44.575591 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:21:44 crc kubenswrapper[4705]: E0124 08:21:44.576311 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:21:55 crc kubenswrapper[4705]: I0124 08:21:55.577193 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:21:55 crc kubenswrapper[4705]: E0124 08:21:55.578552 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:22:08 crc kubenswrapper[4705]: I0124 08:22:08.576785 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:22:08 crc kubenswrapper[4705]: E0124 08:22:08.578071 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:22:21 crc kubenswrapper[4705]: I0124 08:22:21.701750 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:22:21 crc kubenswrapper[4705]: E0124 08:22:21.702497 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:22:35 crc kubenswrapper[4705]: I0124 08:22:35.575895 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:22:35 crc kubenswrapper[4705]: E0124 08:22:35.576773 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:22:48 crc kubenswrapper[4705]: I0124 08:22:48.576396 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:22:48 crc kubenswrapper[4705]: E0124 08:22:48.577218 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:23:03 crc kubenswrapper[4705]: I0124 08:23:03.576153 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:23:03 crc kubenswrapper[4705]: E0124 08:23:03.577038 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:23:17 crc kubenswrapper[4705]: I0124 08:23:17.575776 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:23:17 crc kubenswrapper[4705]: E0124 08:23:17.576541 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:23:28 crc kubenswrapper[4705]: I0124 08:23:28.575693 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:23:28 crc kubenswrapper[4705]: E0124 08:23:28.576504 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:23:42 crc kubenswrapper[4705]: I0124 08:23:42.575599 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:23:42 crc kubenswrapper[4705]: E0124 08:23:42.576465 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.587843 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7m47d"] Jan 24 08:23:55 crc kubenswrapper[4705]: E0124 08:23:55.588709 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cedd79c0-cf51-4320-866a-92af2f646b13" 
containerName="extract-content" Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.588724 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cedd79c0-cf51-4320-866a-92af2f646b13" containerName="extract-content" Jan 24 08:23:55 crc kubenswrapper[4705]: E0124 08:23:55.588747 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cedd79c0-cf51-4320-866a-92af2f646b13" containerName="extract-utilities" Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.588753 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cedd79c0-cf51-4320-866a-92af2f646b13" containerName="extract-utilities" Jan 24 08:23:55 crc kubenswrapper[4705]: E0124 08:23:55.588794 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cedd79c0-cf51-4320-866a-92af2f646b13" containerName="registry-server" Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.588799 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cedd79c0-cf51-4320-866a-92af2f646b13" containerName="registry-server" Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.589026 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="cedd79c0-cf51-4320-866a-92af2f646b13" containerName="registry-server" Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.590540 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.783088 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-utilities\") pod \"redhat-marketplace-7m47d\" (UID: \"801cee15-62ce-4bb8-8147-1d7e2ec3e68c\") " pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.783342 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zhnm\" (UniqueName: \"kubernetes.io/projected/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-kube-api-access-7zhnm\") pod \"redhat-marketplace-7m47d\" (UID: \"801cee15-62ce-4bb8-8147-1d7e2ec3e68c\") " pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.783407 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-catalog-content\") pod \"redhat-marketplace-7m47d\" (UID: \"801cee15-62ce-4bb8-8147-1d7e2ec3e68c\") " pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.831750 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m47d"] Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.885125 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-utilities\") pod \"redhat-marketplace-7m47d\" (UID: \"801cee15-62ce-4bb8-8147-1d7e2ec3e68c\") " pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.885316 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-7zhnm\" (UniqueName: \"kubernetes.io/projected/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-kube-api-access-7zhnm\") pod \"redhat-marketplace-7m47d\" (UID: \"801cee15-62ce-4bb8-8147-1d7e2ec3e68c\") " pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.885377 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-catalog-content\") pod \"redhat-marketplace-7m47d\" (UID: \"801cee15-62ce-4bb8-8147-1d7e2ec3e68c\") " pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.886102 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-catalog-content\") pod \"redhat-marketplace-7m47d\" (UID: \"801cee15-62ce-4bb8-8147-1d7e2ec3e68c\") " pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.887234 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-utilities\") pod \"redhat-marketplace-7m47d\" (UID: \"801cee15-62ce-4bb8-8147-1d7e2ec3e68c\") " pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.919686 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zhnm\" (UniqueName: \"kubernetes.io/projected/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-kube-api-access-7zhnm\") pod \"redhat-marketplace-7m47d\" (UID: \"801cee15-62ce-4bb8-8147-1d7e2ec3e68c\") " pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:23:55 crc kubenswrapper[4705]: I0124 08:23:55.930397 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:23:56 crc kubenswrapper[4705]: I0124 08:23:56.585021 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m47d"] Jan 24 08:23:57 crc kubenswrapper[4705]: I0124 08:23:57.298220 4705 generic.go:334] "Generic (PLEG): container finished" podID="801cee15-62ce-4bb8-8147-1d7e2ec3e68c" containerID="10a0a470c20e6470e4dc032ce26e2bd1042a6f4f0d97d4cb2812f6ea3cc72b60" exitCode=0 Jan 24 08:23:57 crc kubenswrapper[4705]: I0124 08:23:57.298299 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m47d" event={"ID":"801cee15-62ce-4bb8-8147-1d7e2ec3e68c","Type":"ContainerDied","Data":"10a0a470c20e6470e4dc032ce26e2bd1042a6f4f0d97d4cb2812f6ea3cc72b60"} Jan 24 08:23:57 crc kubenswrapper[4705]: I0124 08:23:57.298642 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m47d" event={"ID":"801cee15-62ce-4bb8-8147-1d7e2ec3e68c","Type":"ContainerStarted","Data":"0ccd8351e72cb7ccef0f26f91be53c2dd99239b2dc999fcb9e6ec5955b54977c"} Jan 24 08:23:57 crc kubenswrapper[4705]: I0124 08:23:57.300592 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 08:23:57 crc kubenswrapper[4705]: I0124 08:23:57.575941 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:23:57 crc kubenswrapper[4705]: E0124 08:23:57.576343 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 
08:23:58 crc kubenswrapper[4705]: I0124 08:23:58.309262 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m47d" event={"ID":"801cee15-62ce-4bb8-8147-1d7e2ec3e68c","Type":"ContainerStarted","Data":"457ffd853f1a2046a9f140b95e33ac4c74ac14b97772e9a5f95401382c1e40ae"} Jan 24 08:23:59 crc kubenswrapper[4705]: I0124 08:23:59.320057 4705 generic.go:334] "Generic (PLEG): container finished" podID="801cee15-62ce-4bb8-8147-1d7e2ec3e68c" containerID="457ffd853f1a2046a9f140b95e33ac4c74ac14b97772e9a5f95401382c1e40ae" exitCode=0 Jan 24 08:23:59 crc kubenswrapper[4705]: I0124 08:23:59.320113 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m47d" event={"ID":"801cee15-62ce-4bb8-8147-1d7e2ec3e68c","Type":"ContainerDied","Data":"457ffd853f1a2046a9f140b95e33ac4c74ac14b97772e9a5f95401382c1e40ae"} Jan 24 08:24:02 crc kubenswrapper[4705]: I0124 08:24:02.348356 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m47d" event={"ID":"801cee15-62ce-4bb8-8147-1d7e2ec3e68c","Type":"ContainerStarted","Data":"10a3d1bba494e7e9fbdc8468f397d3d1f66f412a50abb3065118bd67e48736c1"} Jan 24 08:24:02 crc kubenswrapper[4705]: I0124 08:24:02.369961 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7m47d" podStartSLOduration=3.947536111 podStartE2EDuration="7.369932372s" podCreationTimestamp="2026-01-24 08:23:55 +0000 UTC" firstStartedPulling="2026-01-24 08:23:57.300260919 +0000 UTC m=+2576.020134207" lastFinishedPulling="2026-01-24 08:24:00.72265718 +0000 UTC m=+2579.442530468" observedRunningTime="2026-01-24 08:24:02.367376948 +0000 UTC m=+2581.087250246" watchObservedRunningTime="2026-01-24 08:24:02.369932372 +0000 UTC m=+2581.089805670" Jan 24 08:24:05 crc kubenswrapper[4705]: I0124 08:24:05.930933 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:24:05 crc kubenswrapper[4705]: I0124 08:24:05.931265 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:24:05 crc kubenswrapper[4705]: I0124 08:24:05.990956 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:24:06 crc kubenswrapper[4705]: I0124 08:24:06.436804 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:24:06 crc kubenswrapper[4705]: I0124 08:24:06.487399 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m47d"] Jan 24 08:24:08 crc kubenswrapper[4705]: I0124 08:24:08.402049 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7m47d" podUID="801cee15-62ce-4bb8-8147-1d7e2ec3e68c" containerName="registry-server" containerID="cri-o://10a3d1bba494e7e9fbdc8468f397d3d1f66f412a50abb3065118bd67e48736c1" gracePeriod=2 Jan 24 08:24:08 crc kubenswrapper[4705]: I0124 08:24:08.896128 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:24:08 crc kubenswrapper[4705]: I0124 08:24:08.991003 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zhnm\" (UniqueName: \"kubernetes.io/projected/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-kube-api-access-7zhnm\") pod \"801cee15-62ce-4bb8-8147-1d7e2ec3e68c\" (UID: \"801cee15-62ce-4bb8-8147-1d7e2ec3e68c\") " Jan 24 08:24:08 crc kubenswrapper[4705]: I0124 08:24:08.991136 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-catalog-content\") pod \"801cee15-62ce-4bb8-8147-1d7e2ec3e68c\" (UID: \"801cee15-62ce-4bb8-8147-1d7e2ec3e68c\") " Jan 24 08:24:08 crc kubenswrapper[4705]: I0124 08:24:08.991230 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-utilities\") pod \"801cee15-62ce-4bb8-8147-1d7e2ec3e68c\" (UID: \"801cee15-62ce-4bb8-8147-1d7e2ec3e68c\") " Jan 24 08:24:08 crc kubenswrapper[4705]: I0124 08:24:08.991944 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-utilities" (OuterVolumeSpecName: "utilities") pod "801cee15-62ce-4bb8-8147-1d7e2ec3e68c" (UID: "801cee15-62ce-4bb8-8147-1d7e2ec3e68c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:24:08 crc kubenswrapper[4705]: I0124 08:24:08.998632 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-kube-api-access-7zhnm" (OuterVolumeSpecName: "kube-api-access-7zhnm") pod "801cee15-62ce-4bb8-8147-1d7e2ec3e68c" (UID: "801cee15-62ce-4bb8-8147-1d7e2ec3e68c"). InnerVolumeSpecName "kube-api-access-7zhnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.017280 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "801cee15-62ce-4bb8-8147-1d7e2ec3e68c" (UID: "801cee15-62ce-4bb8-8147-1d7e2ec3e68c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.093675 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zhnm\" (UniqueName: \"kubernetes.io/projected/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-kube-api-access-7zhnm\") on node \"crc\" DevicePath \"\"" Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.093720 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.093734 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/801cee15-62ce-4bb8-8147-1d7e2ec3e68c-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.414081 4705 generic.go:334] "Generic (PLEG): container finished" podID="801cee15-62ce-4bb8-8147-1d7e2ec3e68c" containerID="10a3d1bba494e7e9fbdc8468f397d3d1f66f412a50abb3065118bd67e48736c1" exitCode=0 Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.414122 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m47d" event={"ID":"801cee15-62ce-4bb8-8147-1d7e2ec3e68c","Type":"ContainerDied","Data":"10a3d1bba494e7e9fbdc8468f397d3d1f66f412a50abb3065118bd67e48736c1"} Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.414153 4705 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-7m47d" event={"ID":"801cee15-62ce-4bb8-8147-1d7e2ec3e68c","Type":"ContainerDied","Data":"0ccd8351e72cb7ccef0f26f91be53c2dd99239b2dc999fcb9e6ec5955b54977c"} Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.414171 4705 scope.go:117] "RemoveContainer" containerID="10a3d1bba494e7e9fbdc8468f397d3d1f66f412a50abb3065118bd67e48736c1" Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.414314 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m47d" Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.452755 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m47d"] Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.457380 4705 scope.go:117] "RemoveContainer" containerID="457ffd853f1a2046a9f140b95e33ac4c74ac14b97772e9a5f95401382c1e40ae" Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.461571 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m47d"] Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.476912 4705 scope.go:117] "RemoveContainer" containerID="10a0a470c20e6470e4dc032ce26e2bd1042a6f4f0d97d4cb2812f6ea3cc72b60" Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.519259 4705 scope.go:117] "RemoveContainer" containerID="10a3d1bba494e7e9fbdc8468f397d3d1f66f412a50abb3065118bd67e48736c1" Jan 24 08:24:09 crc kubenswrapper[4705]: E0124 08:24:09.519891 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10a3d1bba494e7e9fbdc8468f397d3d1f66f412a50abb3065118bd67e48736c1\": container with ID starting with 10a3d1bba494e7e9fbdc8468f397d3d1f66f412a50abb3065118bd67e48736c1 not found: ID does not exist" containerID="10a3d1bba494e7e9fbdc8468f397d3d1f66f412a50abb3065118bd67e48736c1" Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.519947 4705 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10a3d1bba494e7e9fbdc8468f397d3d1f66f412a50abb3065118bd67e48736c1"} err="failed to get container status \"10a3d1bba494e7e9fbdc8468f397d3d1f66f412a50abb3065118bd67e48736c1\": rpc error: code = NotFound desc = could not find container \"10a3d1bba494e7e9fbdc8468f397d3d1f66f412a50abb3065118bd67e48736c1\": container with ID starting with 10a3d1bba494e7e9fbdc8468f397d3d1f66f412a50abb3065118bd67e48736c1 not found: ID does not exist" Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.519975 4705 scope.go:117] "RemoveContainer" containerID="457ffd853f1a2046a9f140b95e33ac4c74ac14b97772e9a5f95401382c1e40ae" Jan 24 08:24:09 crc kubenswrapper[4705]: E0124 08:24:09.520381 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"457ffd853f1a2046a9f140b95e33ac4c74ac14b97772e9a5f95401382c1e40ae\": container with ID starting with 457ffd853f1a2046a9f140b95e33ac4c74ac14b97772e9a5f95401382c1e40ae not found: ID does not exist" containerID="457ffd853f1a2046a9f140b95e33ac4c74ac14b97772e9a5f95401382c1e40ae" Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.520427 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"457ffd853f1a2046a9f140b95e33ac4c74ac14b97772e9a5f95401382c1e40ae"} err="failed to get container status \"457ffd853f1a2046a9f140b95e33ac4c74ac14b97772e9a5f95401382c1e40ae\": rpc error: code = NotFound desc = could not find container \"457ffd853f1a2046a9f140b95e33ac4c74ac14b97772e9a5f95401382c1e40ae\": container with ID starting with 457ffd853f1a2046a9f140b95e33ac4c74ac14b97772e9a5f95401382c1e40ae not found: ID does not exist" Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.520455 4705 scope.go:117] "RemoveContainer" containerID="10a0a470c20e6470e4dc032ce26e2bd1042a6f4f0d97d4cb2812f6ea3cc72b60" Jan 24 08:24:09 crc kubenswrapper[4705]: E0124 
08:24:09.520799 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10a0a470c20e6470e4dc032ce26e2bd1042a6f4f0d97d4cb2812f6ea3cc72b60\": container with ID starting with 10a0a470c20e6470e4dc032ce26e2bd1042a6f4f0d97d4cb2812f6ea3cc72b60 not found: ID does not exist" containerID="10a0a470c20e6470e4dc032ce26e2bd1042a6f4f0d97d4cb2812f6ea3cc72b60" Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.520843 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10a0a470c20e6470e4dc032ce26e2bd1042a6f4f0d97d4cb2812f6ea3cc72b60"} err="failed to get container status \"10a0a470c20e6470e4dc032ce26e2bd1042a6f4f0d97d4cb2812f6ea3cc72b60\": rpc error: code = NotFound desc = could not find container \"10a0a470c20e6470e4dc032ce26e2bd1042a6f4f0d97d4cb2812f6ea3cc72b60\": container with ID starting with 10a0a470c20e6470e4dc032ce26e2bd1042a6f4f0d97d4cb2812f6ea3cc72b60 not found: ID does not exist" Jan 24 08:24:09 crc kubenswrapper[4705]: I0124 08:24:09.587922 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="801cee15-62ce-4bb8-8147-1d7e2ec3e68c" path="/var/lib/kubelet/pods/801cee15-62ce-4bb8-8147-1d7e2ec3e68c/volumes" Jan 24 08:24:11 crc kubenswrapper[4705]: I0124 08:24:11.583450 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:24:11 crc kubenswrapper[4705]: E0124 08:24:11.584217 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:24:19 crc kubenswrapper[4705]: I0124 08:24:19.519875 
4705 generic.go:334] "Generic (PLEG): container finished" podID="892c0147-b4a3-451d-9c4c-c2a0cb3cf56e" containerID="42e5ba8f098443bbf4945bd1edffd0ad06e0bbdcf8cd04f2a94fbd5c26655240" exitCode=0 Jan 24 08:24:19 crc kubenswrapper[4705]: I0124 08:24:19.520071 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" event={"ID":"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e","Type":"ContainerDied","Data":"42e5ba8f098443bbf4945bd1edffd0ad06e0bbdcf8cd04f2a94fbd5c26655240"} Jan 24 08:24:20 crc kubenswrapper[4705]: I0124 08:24:20.943351 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.097402 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89rfz\" (UniqueName: \"kubernetes.io/projected/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-kube-api-access-89rfz\") pod \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.097457 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-inventory\") pod \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.097680 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-libvirt-combined-ca-bundle\") pod \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.097735 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-ssh-key-openstack-edpm-ipam\") pod \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.097765 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-libvirt-secret-0\") pod \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\" (UID: \"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e\") " Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.173356 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-kube-api-access-89rfz" (OuterVolumeSpecName: "kube-api-access-89rfz") pod "892c0147-b4a3-451d-9c4c-c2a0cb3cf56e" (UID: "892c0147-b4a3-451d-9c4c-c2a0cb3cf56e"). InnerVolumeSpecName "kube-api-access-89rfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.178794 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "892c0147-b4a3-451d-9c4c-c2a0cb3cf56e" (UID: "892c0147-b4a3-451d-9c4c-c2a0cb3cf56e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.180747 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-inventory" (OuterVolumeSpecName: "inventory") pod "892c0147-b4a3-451d-9c4c-c2a0cb3cf56e" (UID: "892c0147-b4a3-451d-9c4c-c2a0cb3cf56e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.183765 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "892c0147-b4a3-451d-9c4c-c2a0cb3cf56e" (UID: "892c0147-b4a3-451d-9c4c-c2a0cb3cf56e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.194649 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "892c0147-b4a3-451d-9c4c-c2a0cb3cf56e" (UID: "892c0147-b4a3-451d-9c4c-c2a0cb3cf56e"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.297381 4705 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.297449 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.297473 4705 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.297485 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89rfz\" (UniqueName: 
\"kubernetes.io/projected/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-kube-api-access-89rfz\") on node \"crc\" DevicePath \"\"" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.297500 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/892c0147-b4a3-451d-9c4c-c2a0cb3cf56e-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.540945 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" event={"ID":"892c0147-b4a3-451d-9c4c-c2a0cb3cf56e","Type":"ContainerDied","Data":"b99143039fd40af314bac83ec510daa644f950a72fc96f442a427468abfeba71"} Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.540994 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b99143039fd40af314bac83ec510daa644f950a72fc96f442a427468abfeba71" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.541043 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.635450 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql"] Jan 24 08:24:21 crc kubenswrapper[4705]: E0124 08:24:21.635993 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801cee15-62ce-4bb8-8147-1d7e2ec3e68c" containerName="registry-server" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.636018 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="801cee15-62ce-4bb8-8147-1d7e2ec3e68c" containerName="registry-server" Jan 24 08:24:21 crc kubenswrapper[4705]: E0124 08:24:21.636051 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801cee15-62ce-4bb8-8147-1d7e2ec3e68c" containerName="extract-content" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.636060 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="801cee15-62ce-4bb8-8147-1d7e2ec3e68c" containerName="extract-content" Jan 24 08:24:21 crc kubenswrapper[4705]: E0124 08:24:21.636082 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="892c0147-b4a3-451d-9c4c-c2a0cb3cf56e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.636091 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="892c0147-b4a3-451d-9c4c-c2a0cb3cf56e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 24 08:24:21 crc kubenswrapper[4705]: E0124 08:24:21.636100 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801cee15-62ce-4bb8-8147-1d7e2ec3e68c" containerName="extract-utilities" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.636107 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="801cee15-62ce-4bb8-8147-1d7e2ec3e68c" containerName="extract-utilities" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.636389 4705 
memory_manager.go:354] "RemoveStaleState removing state" podUID="892c0147-b4a3-451d-9c4c-c2a0cb3cf56e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.636414 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="801cee15-62ce-4bb8-8147-1d7e2ec3e68c" containerName="registry-server" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.637284 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.649597 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.649601 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.650081 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.650286 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.650676 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.653308 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql"] Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.653454 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.653459 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 24 08:24:21 crc 
kubenswrapper[4705]: I0124 08:24:21.809759 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.809809 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.809857 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.809880 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpntl\" (UniqueName: \"kubernetes.io/projected/392633fe-e467-4669-9773-89b44ed68ac6-kube-api-access-xpntl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.810054 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.810101 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.810139 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/392633fe-e467-4669-9773-89b44ed68ac6-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.810190 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.810278 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" 
(UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.912032 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.912193 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.912247 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.912284 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.912323 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.912348 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpntl\" (UniqueName: \"kubernetes.io/projected/392633fe-e467-4669-9773-89b44ed68ac6-kube-api-access-xpntl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.912390 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.912431 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.912498 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/392633fe-e467-4669-9773-89b44ed68ac6-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: 
\"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.913365 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/392633fe-e467-4669-9773-89b44ed68ac6-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.917085 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.917883 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.917956 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.921501 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.921500 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.923180 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.923724 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:21 crc kubenswrapper[4705]: I0124 08:24:21.983797 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpntl\" (UniqueName: \"kubernetes.io/projected/392633fe-e467-4669-9773-89b44ed68ac6-kube-api-access-xpntl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rmhql\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" 
Jan 24 08:24:22 crc kubenswrapper[4705]: I0124 08:24:22.260537 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:24:22 crc kubenswrapper[4705]: I0124 08:24:22.576554 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:24:22 crc kubenswrapper[4705]: E0124 08:24:22.577146 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:24:22 crc kubenswrapper[4705]: I0124 08:24:22.788500 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql"] Jan 24 08:24:23 crc kubenswrapper[4705]: I0124 08:24:23.561790 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" event={"ID":"392633fe-e467-4669-9773-89b44ed68ac6","Type":"ContainerStarted","Data":"aa73de2bb3056e6e44de5964abfe492f9723f913c6ca52b840fa4b11922f875a"} Jan 24 08:24:24 crc kubenswrapper[4705]: I0124 08:24:24.573305 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" event={"ID":"392633fe-e467-4669-9773-89b44ed68ac6","Type":"ContainerStarted","Data":"4616b3deb4971b89229708040610aecdafebc578eaa4cc80f723880ca53524f5"} Jan 24 08:24:24 crc kubenswrapper[4705]: I0124 08:24:24.593719 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" podStartSLOduration=3.013442132 podStartE2EDuration="3.593694893s" 
podCreationTimestamp="2026-01-24 08:24:21 +0000 UTC" firstStartedPulling="2026-01-24 08:24:22.79759783 +0000 UTC m=+2601.517471118" lastFinishedPulling="2026-01-24 08:24:23.377850601 +0000 UTC m=+2602.097723879" observedRunningTime="2026-01-24 08:24:24.590156122 +0000 UTC m=+2603.310029410" watchObservedRunningTime="2026-01-24 08:24:24.593694893 +0000 UTC m=+2603.313568181" Jan 24 08:24:36 crc kubenswrapper[4705]: I0124 08:24:36.575957 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:24:36 crc kubenswrapper[4705]: E0124 08:24:36.576784 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:24:49 crc kubenswrapper[4705]: I0124 08:24:49.575586 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:24:49 crc kubenswrapper[4705]: I0124 08:24:49.816041 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"8ff6b735ba6f0f350364f94fe8a29aa83a25781fc6a5acee75ef5ed63351363c"} Jan 24 08:24:58 crc kubenswrapper[4705]: I0124 08:24:58.135891 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rfc24"] Jan 24 08:24:58 crc kubenswrapper[4705]: I0124 08:24:58.139094 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:24:58 crc kubenswrapper[4705]: I0124 08:24:58.148902 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rfc24"] Jan 24 08:24:58 crc kubenswrapper[4705]: I0124 08:24:58.207475 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae82669e-95dd-4a26-b236-a9eec7d658f3-catalog-content\") pod \"redhat-operators-rfc24\" (UID: \"ae82669e-95dd-4a26-b236-a9eec7d658f3\") " pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:24:58 crc kubenswrapper[4705]: I0124 08:24:58.207661 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae82669e-95dd-4a26-b236-a9eec7d658f3-utilities\") pod \"redhat-operators-rfc24\" (UID: \"ae82669e-95dd-4a26-b236-a9eec7d658f3\") " pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:24:58 crc kubenswrapper[4705]: I0124 08:24:58.207932 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf7wf\" (UniqueName: \"kubernetes.io/projected/ae82669e-95dd-4a26-b236-a9eec7d658f3-kube-api-access-lf7wf\") pod \"redhat-operators-rfc24\" (UID: \"ae82669e-95dd-4a26-b236-a9eec7d658f3\") " pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:24:58 crc kubenswrapper[4705]: I0124 08:24:58.310326 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf7wf\" (UniqueName: \"kubernetes.io/projected/ae82669e-95dd-4a26-b236-a9eec7d658f3-kube-api-access-lf7wf\") pod \"redhat-operators-rfc24\" (UID: \"ae82669e-95dd-4a26-b236-a9eec7d658f3\") " pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:24:58 crc kubenswrapper[4705]: I0124 08:24:58.310404 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae82669e-95dd-4a26-b236-a9eec7d658f3-catalog-content\") pod \"redhat-operators-rfc24\" (UID: \"ae82669e-95dd-4a26-b236-a9eec7d658f3\") " pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:24:58 crc kubenswrapper[4705]: I0124 08:24:58.310467 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae82669e-95dd-4a26-b236-a9eec7d658f3-utilities\") pod \"redhat-operators-rfc24\" (UID: \"ae82669e-95dd-4a26-b236-a9eec7d658f3\") " pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:24:58 crc kubenswrapper[4705]: I0124 08:24:58.310971 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae82669e-95dd-4a26-b236-a9eec7d658f3-catalog-content\") pod \"redhat-operators-rfc24\" (UID: \"ae82669e-95dd-4a26-b236-a9eec7d658f3\") " pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:24:58 crc kubenswrapper[4705]: I0124 08:24:58.311053 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae82669e-95dd-4a26-b236-a9eec7d658f3-utilities\") pod \"redhat-operators-rfc24\" (UID: \"ae82669e-95dd-4a26-b236-a9eec7d658f3\") " pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:24:58 crc kubenswrapper[4705]: I0124 08:24:58.331656 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf7wf\" (UniqueName: \"kubernetes.io/projected/ae82669e-95dd-4a26-b236-a9eec7d658f3-kube-api-access-lf7wf\") pod \"redhat-operators-rfc24\" (UID: \"ae82669e-95dd-4a26-b236-a9eec7d658f3\") " pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:24:58 crc kubenswrapper[4705]: I0124 08:24:58.470621 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:24:59 crc kubenswrapper[4705]: I0124 08:24:59.112796 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rfc24"] Jan 24 08:24:59 crc kubenswrapper[4705]: I0124 08:24:59.907949 4705 generic.go:334] "Generic (PLEG): container finished" podID="ae82669e-95dd-4a26-b236-a9eec7d658f3" containerID="4833fe28cfedf10381c1fff1e1972d41df4c3e1e8022e495d6ceb7327706666e" exitCode=0 Jan 24 08:24:59 crc kubenswrapper[4705]: I0124 08:24:59.908020 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfc24" event={"ID":"ae82669e-95dd-4a26-b236-a9eec7d658f3","Type":"ContainerDied","Data":"4833fe28cfedf10381c1fff1e1972d41df4c3e1e8022e495d6ceb7327706666e"} Jan 24 08:24:59 crc kubenswrapper[4705]: I0124 08:24:59.908276 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfc24" event={"ID":"ae82669e-95dd-4a26-b236-a9eec7d658f3","Type":"ContainerStarted","Data":"397b132b2ef0c65d4389aa17b88975494b37270d8b4501849074c5f3f0d0346c"} Jan 24 08:25:00 crc kubenswrapper[4705]: I0124 08:25:00.917369 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfc24" event={"ID":"ae82669e-95dd-4a26-b236-a9eec7d658f3","Type":"ContainerStarted","Data":"f428810190cca2893d61df948b70f456ff77953a133730a9d2691d8268862f09"} Jan 24 08:25:03 crc kubenswrapper[4705]: I0124 08:25:03.943903 4705 generic.go:334] "Generic (PLEG): container finished" podID="ae82669e-95dd-4a26-b236-a9eec7d658f3" containerID="f428810190cca2893d61df948b70f456ff77953a133730a9d2691d8268862f09" exitCode=0 Jan 24 08:25:03 crc kubenswrapper[4705]: I0124 08:25:03.943965 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfc24" 
event={"ID":"ae82669e-95dd-4a26-b236-a9eec7d658f3","Type":"ContainerDied","Data":"f428810190cca2893d61df948b70f456ff77953a133730a9d2691d8268862f09"} Jan 24 08:25:09 crc kubenswrapper[4705]: I0124 08:25:09.002057 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfc24" event={"ID":"ae82669e-95dd-4a26-b236-a9eec7d658f3","Type":"ContainerStarted","Data":"7f0b997540b52a40af4c75fe5a144fe24227b4a803c7f9d24e8bb59bc053de7d"} Jan 24 08:25:09 crc kubenswrapper[4705]: I0124 08:25:09.038365 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rfc24" podStartSLOduration=2.262947737 podStartE2EDuration="11.038325605s" podCreationTimestamp="2026-01-24 08:24:58 +0000 UTC" firstStartedPulling="2026-01-24 08:24:59.909501816 +0000 UTC m=+2638.629375114" lastFinishedPulling="2026-01-24 08:25:08.684879694 +0000 UTC m=+2647.404752982" observedRunningTime="2026-01-24 08:25:09.027897347 +0000 UTC m=+2647.747770635" watchObservedRunningTime="2026-01-24 08:25:09.038325605 +0000 UTC m=+2647.758198893" Jan 24 08:25:18 crc kubenswrapper[4705]: I0124 08:25:18.470855 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:25:18 crc kubenswrapper[4705]: I0124 08:25:18.471501 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:25:18 crc kubenswrapper[4705]: I0124 08:25:18.526160 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:25:19 crc kubenswrapper[4705]: I0124 08:25:19.158484 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:25:19 crc kubenswrapper[4705]: I0124 08:25:19.210149 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-rfc24"] Jan 24 08:25:21 crc kubenswrapper[4705]: I0124 08:25:21.102297 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rfc24" podUID="ae82669e-95dd-4a26-b236-a9eec7d658f3" containerName="registry-server" containerID="cri-o://7f0b997540b52a40af4c75fe5a144fe24227b4a803c7f9d24e8bb59bc053de7d" gracePeriod=2 Jan 24 08:25:21 crc kubenswrapper[4705]: I0124 08:25:21.554163 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:25:21 crc kubenswrapper[4705]: I0124 08:25:21.625595 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae82669e-95dd-4a26-b236-a9eec7d658f3-utilities\") pod \"ae82669e-95dd-4a26-b236-a9eec7d658f3\" (UID: \"ae82669e-95dd-4a26-b236-a9eec7d658f3\") " Jan 24 08:25:21 crc kubenswrapper[4705]: I0124 08:25:21.625740 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf7wf\" (UniqueName: \"kubernetes.io/projected/ae82669e-95dd-4a26-b236-a9eec7d658f3-kube-api-access-lf7wf\") pod \"ae82669e-95dd-4a26-b236-a9eec7d658f3\" (UID: \"ae82669e-95dd-4a26-b236-a9eec7d658f3\") " Jan 24 08:25:21 crc kubenswrapper[4705]: I0124 08:25:21.625777 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae82669e-95dd-4a26-b236-a9eec7d658f3-catalog-content\") pod \"ae82669e-95dd-4a26-b236-a9eec7d658f3\" (UID: \"ae82669e-95dd-4a26-b236-a9eec7d658f3\") " Jan 24 08:25:21 crc kubenswrapper[4705]: I0124 08:25:21.627607 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae82669e-95dd-4a26-b236-a9eec7d658f3-utilities" (OuterVolumeSpecName: "utilities") pod "ae82669e-95dd-4a26-b236-a9eec7d658f3" (UID: 
"ae82669e-95dd-4a26-b236-a9eec7d658f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:25:21 crc kubenswrapper[4705]: I0124 08:25:21.632536 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae82669e-95dd-4a26-b236-a9eec7d658f3-kube-api-access-lf7wf" (OuterVolumeSpecName: "kube-api-access-lf7wf") pod "ae82669e-95dd-4a26-b236-a9eec7d658f3" (UID: "ae82669e-95dd-4a26-b236-a9eec7d658f3"). InnerVolumeSpecName "kube-api-access-lf7wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:25:21 crc kubenswrapper[4705]: I0124 08:25:21.729083 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae82669e-95dd-4a26-b236-a9eec7d658f3-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:25:21 crc kubenswrapper[4705]: I0124 08:25:21.729124 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf7wf\" (UniqueName: \"kubernetes.io/projected/ae82669e-95dd-4a26-b236-a9eec7d658f3-kube-api-access-lf7wf\") on node \"crc\" DevicePath \"\"" Jan 24 08:25:21 crc kubenswrapper[4705]: I0124 08:25:21.749446 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae82669e-95dd-4a26-b236-a9eec7d658f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae82669e-95dd-4a26-b236-a9eec7d658f3" (UID: "ae82669e-95dd-4a26-b236-a9eec7d658f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:25:21 crc kubenswrapper[4705]: I0124 08:25:21.833112 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae82669e-95dd-4a26-b236-a9eec7d658f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:25:22 crc kubenswrapper[4705]: I0124 08:25:22.113517 4705 generic.go:334] "Generic (PLEG): container finished" podID="ae82669e-95dd-4a26-b236-a9eec7d658f3" containerID="7f0b997540b52a40af4c75fe5a144fe24227b4a803c7f9d24e8bb59bc053de7d" exitCode=0 Jan 24 08:25:22 crc kubenswrapper[4705]: I0124 08:25:22.113581 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfc24" event={"ID":"ae82669e-95dd-4a26-b236-a9eec7d658f3","Type":"ContainerDied","Data":"7f0b997540b52a40af4c75fe5a144fe24227b4a803c7f9d24e8bb59bc053de7d"} Jan 24 08:25:22 crc kubenswrapper[4705]: I0124 08:25:22.113626 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfc24" event={"ID":"ae82669e-95dd-4a26-b236-a9eec7d658f3","Type":"ContainerDied","Data":"397b132b2ef0c65d4389aa17b88975494b37270d8b4501849074c5f3f0d0346c"} Jan 24 08:25:22 crc kubenswrapper[4705]: I0124 08:25:22.113636 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rfc24" Jan 24 08:25:22 crc kubenswrapper[4705]: I0124 08:25:22.113649 4705 scope.go:117] "RemoveContainer" containerID="7f0b997540b52a40af4c75fe5a144fe24227b4a803c7f9d24e8bb59bc053de7d" Jan 24 08:25:22 crc kubenswrapper[4705]: I0124 08:25:22.148836 4705 scope.go:117] "RemoveContainer" containerID="f428810190cca2893d61df948b70f456ff77953a133730a9d2691d8268862f09" Jan 24 08:25:22 crc kubenswrapper[4705]: I0124 08:25:22.153210 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rfc24"] Jan 24 08:25:22 crc kubenswrapper[4705]: I0124 08:25:22.214016 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rfc24"] Jan 24 08:25:22 crc kubenswrapper[4705]: I0124 08:25:22.236794 4705 scope.go:117] "RemoveContainer" containerID="4833fe28cfedf10381c1fff1e1972d41df4c3e1e8022e495d6ceb7327706666e" Jan 24 08:25:22 crc kubenswrapper[4705]: I0124 08:25:22.275538 4705 scope.go:117] "RemoveContainer" containerID="7f0b997540b52a40af4c75fe5a144fe24227b4a803c7f9d24e8bb59bc053de7d" Jan 24 08:25:22 crc kubenswrapper[4705]: E0124 08:25:22.276106 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f0b997540b52a40af4c75fe5a144fe24227b4a803c7f9d24e8bb59bc053de7d\": container with ID starting with 7f0b997540b52a40af4c75fe5a144fe24227b4a803c7f9d24e8bb59bc053de7d not found: ID does not exist" containerID="7f0b997540b52a40af4c75fe5a144fe24227b4a803c7f9d24e8bb59bc053de7d" Jan 24 08:25:22 crc kubenswrapper[4705]: I0124 08:25:22.276233 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0b997540b52a40af4c75fe5a144fe24227b4a803c7f9d24e8bb59bc053de7d"} err="failed to get container status \"7f0b997540b52a40af4c75fe5a144fe24227b4a803c7f9d24e8bb59bc053de7d\": rpc error: code = NotFound desc = could not find container 
\"7f0b997540b52a40af4c75fe5a144fe24227b4a803c7f9d24e8bb59bc053de7d\": container with ID starting with 7f0b997540b52a40af4c75fe5a144fe24227b4a803c7f9d24e8bb59bc053de7d not found: ID does not exist" Jan 24 08:25:22 crc kubenswrapper[4705]: I0124 08:25:22.276336 4705 scope.go:117] "RemoveContainer" containerID="f428810190cca2893d61df948b70f456ff77953a133730a9d2691d8268862f09" Jan 24 08:25:22 crc kubenswrapper[4705]: E0124 08:25:22.276755 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f428810190cca2893d61df948b70f456ff77953a133730a9d2691d8268862f09\": container with ID starting with f428810190cca2893d61df948b70f456ff77953a133730a9d2691d8268862f09 not found: ID does not exist" containerID="f428810190cca2893d61df948b70f456ff77953a133730a9d2691d8268862f09" Jan 24 08:25:22 crc kubenswrapper[4705]: I0124 08:25:22.276905 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f428810190cca2893d61df948b70f456ff77953a133730a9d2691d8268862f09"} err="failed to get container status \"f428810190cca2893d61df948b70f456ff77953a133730a9d2691d8268862f09\": rpc error: code = NotFound desc = could not find container \"f428810190cca2893d61df948b70f456ff77953a133730a9d2691d8268862f09\": container with ID starting with f428810190cca2893d61df948b70f456ff77953a133730a9d2691d8268862f09 not found: ID does not exist" Jan 24 08:25:22 crc kubenswrapper[4705]: I0124 08:25:22.277007 4705 scope.go:117] "RemoveContainer" containerID="4833fe28cfedf10381c1fff1e1972d41df4c3e1e8022e495d6ceb7327706666e" Jan 24 08:25:22 crc kubenswrapper[4705]: E0124 08:25:22.277441 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4833fe28cfedf10381c1fff1e1972d41df4c3e1e8022e495d6ceb7327706666e\": container with ID starting with 4833fe28cfedf10381c1fff1e1972d41df4c3e1e8022e495d6ceb7327706666e not found: ID does not exist" 
containerID="4833fe28cfedf10381c1fff1e1972d41df4c3e1e8022e495d6ceb7327706666e" Jan 24 08:25:22 crc kubenswrapper[4705]: I0124 08:25:22.277547 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4833fe28cfedf10381c1fff1e1972d41df4c3e1e8022e495d6ceb7327706666e"} err="failed to get container status \"4833fe28cfedf10381c1fff1e1972d41df4c3e1e8022e495d6ceb7327706666e\": rpc error: code = NotFound desc = could not find container \"4833fe28cfedf10381c1fff1e1972d41df4c3e1e8022e495d6ceb7327706666e\": container with ID starting with 4833fe28cfedf10381c1fff1e1972d41df4c3e1e8022e495d6ceb7327706666e not found: ID does not exist" Jan 24 08:25:23 crc kubenswrapper[4705]: I0124 08:25:23.588533 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae82669e-95dd-4a26-b236-a9eec7d658f3" path="/var/lib/kubelet/pods/ae82669e-95dd-4a26-b236-a9eec7d658f3/volumes" Jan 24 08:27:07 crc kubenswrapper[4705]: I0124 08:27:07.070983 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:27:07 crc kubenswrapper[4705]: I0124 08:27:07.071459 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:27:07 crc kubenswrapper[4705]: I0124 08:27:07.148067 4705 generic.go:334] "Generic (PLEG): container finished" podID="392633fe-e467-4669-9773-89b44ed68ac6" containerID="4616b3deb4971b89229708040610aecdafebc578eaa4cc80f723880ca53524f5" exitCode=0 Jan 24 08:27:07 crc kubenswrapper[4705]: I0124 08:27:07.148133 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" event={"ID":"392633fe-e467-4669-9773-89b44ed68ac6","Type":"ContainerDied","Data":"4616b3deb4971b89229708040610aecdafebc578eaa4cc80f723880ca53524f5"} Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.638780 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.796320 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-inventory\") pod \"392633fe-e467-4669-9773-89b44ed68ac6\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.796382 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-cell1-compute-config-0\") pod \"392633fe-e467-4669-9773-89b44ed68ac6\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.796408 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-combined-ca-bundle\") pod \"392633fe-e467-4669-9773-89b44ed68ac6\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.796512 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-migration-ssh-key-1\") pod \"392633fe-e467-4669-9773-89b44ed68ac6\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.796583 4705 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-ssh-key-openstack-edpm-ipam\") pod \"392633fe-e467-4669-9773-89b44ed68ac6\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.797301 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/392633fe-e467-4669-9773-89b44ed68ac6-nova-extra-config-0\") pod \"392633fe-e467-4669-9773-89b44ed68ac6\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.797408 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-cell1-compute-config-1\") pod \"392633fe-e467-4669-9773-89b44ed68ac6\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.797472 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-migration-ssh-key-0\") pod \"392633fe-e467-4669-9773-89b44ed68ac6\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.797539 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpntl\" (UniqueName: \"kubernetes.io/projected/392633fe-e467-4669-9773-89b44ed68ac6-kube-api-access-xpntl\") pod \"392633fe-e467-4669-9773-89b44ed68ac6\" (UID: \"392633fe-e467-4669-9773-89b44ed68ac6\") " Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.819698 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "392633fe-e467-4669-9773-89b44ed68ac6" (UID: "392633fe-e467-4669-9773-89b44ed68ac6"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.819933 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392633fe-e467-4669-9773-89b44ed68ac6-kube-api-access-xpntl" (OuterVolumeSpecName: "kube-api-access-xpntl") pod "392633fe-e467-4669-9773-89b44ed68ac6" (UID: "392633fe-e467-4669-9773-89b44ed68ac6"). InnerVolumeSpecName "kube-api-access-xpntl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.827121 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-inventory" (OuterVolumeSpecName: "inventory") pod "392633fe-e467-4669-9773-89b44ed68ac6" (UID: "392633fe-e467-4669-9773-89b44ed68ac6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.828059 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "392633fe-e467-4669-9773-89b44ed68ac6" (UID: "392633fe-e467-4669-9773-89b44ed68ac6"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.829458 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "392633fe-e467-4669-9773-89b44ed68ac6" (UID: "392633fe-e467-4669-9773-89b44ed68ac6"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.831579 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "392633fe-e467-4669-9773-89b44ed68ac6" (UID: "392633fe-e467-4669-9773-89b44ed68ac6"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.834265 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "392633fe-e467-4669-9773-89b44ed68ac6" (UID: "392633fe-e467-4669-9773-89b44ed68ac6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.836019 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "392633fe-e467-4669-9773-89b44ed68ac6" (UID: "392633fe-e467-4669-9773-89b44ed68ac6"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.838801 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392633fe-e467-4669-9773-89b44ed68ac6-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "392633fe-e467-4669-9773-89b44ed68ac6" (UID: "392633fe-e467-4669-9773-89b44ed68ac6"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.899070 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.899114 4705 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/392633fe-e467-4669-9773-89b44ed68ac6-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.899123 4705 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.899132 4705 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.899143 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpntl\" (UniqueName: \"kubernetes.io/projected/392633fe-e467-4669-9773-89b44ed68ac6-kube-api-access-xpntl\") on node \"crc\" DevicePath \"\"" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.899157 
4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.899168 4705 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.899175 4705 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:27:08 crc kubenswrapper[4705]: I0124 08:27:08.899183 4705 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/392633fe-e467-4669-9773-89b44ed68ac6-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.165679 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" event={"ID":"392633fe-e467-4669-9773-89b44ed68ac6","Type":"ContainerDied","Data":"aa73de2bb3056e6e44de5964abfe492f9723f913c6ca52b840fa4b11922f875a"} Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.165990 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa73de2bb3056e6e44de5964abfe492f9723f913c6ca52b840fa4b11922f875a" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.165738 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rmhql" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.337499 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p"] Jan 24 08:27:09 crc kubenswrapper[4705]: E0124 08:27:09.338114 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae82669e-95dd-4a26-b236-a9eec7d658f3" containerName="extract-utilities" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.338141 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae82669e-95dd-4a26-b236-a9eec7d658f3" containerName="extract-utilities" Jan 24 08:27:09 crc kubenswrapper[4705]: E0124 08:27:09.338163 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae82669e-95dd-4a26-b236-a9eec7d658f3" containerName="extract-content" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.338170 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae82669e-95dd-4a26-b236-a9eec7d658f3" containerName="extract-content" Jan 24 08:27:09 crc kubenswrapper[4705]: E0124 08:27:09.338203 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392633fe-e467-4669-9773-89b44ed68ac6" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.338213 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="392633fe-e467-4669-9773-89b44ed68ac6" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 24 08:27:09 crc kubenswrapper[4705]: E0124 08:27:09.338237 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae82669e-95dd-4a26-b236-a9eec7d658f3" containerName="registry-server" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.338244 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae82669e-95dd-4a26-b236-a9eec7d658f3" containerName="registry-server" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.338498 4705 
memory_manager.go:354] "RemoveStaleState removing state" podUID="392633fe-e467-4669-9773-89b44ed68ac6" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.338520 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae82669e-95dd-4a26-b236-a9eec7d658f3" containerName="registry-server" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.339339 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.342176 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.342384 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.342394 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.342478 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xchmd" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.344270 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.350908 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p"] Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.423282 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.423343 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.423387 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.423443 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4w62\" (UniqueName: \"kubernetes.io/projected/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-kube-api-access-p4w62\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.423465 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.423513 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.423587 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.525971 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.526063 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc 
kubenswrapper[4705]: I0124 08:27:09.526104 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.526165 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.526191 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4w62\" (UniqueName: \"kubernetes.io/projected/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-kube-api-access-p4w62\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.526216 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.526238 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.531476 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.531474 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.542481 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.543412 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.543470 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.545363 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.547036 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4w62\" (UniqueName: \"kubernetes.io/projected/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-kube-api-access-p4w62\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:09 crc kubenswrapper[4705]: I0124 08:27:09.658061 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:27:10 crc kubenswrapper[4705]: I0124 08:27:10.194222 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p"] Jan 24 08:27:11 crc kubenswrapper[4705]: I0124 08:27:11.183936 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" event={"ID":"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3","Type":"ContainerStarted","Data":"b74172c01b1121e370867adb52658f796b53210479a5b172bfc458aac1009e45"} Jan 24 08:27:11 crc kubenswrapper[4705]: I0124 08:27:11.184327 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" event={"ID":"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3","Type":"ContainerStarted","Data":"4cc680425600d6eb98e9de1c53d4149b5a92508b7e6fc5831d2eece8d314166f"} Jan 24 08:27:11 crc kubenswrapper[4705]: I0124 08:27:11.206248 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" podStartSLOduration=1.8017355720000001 podStartE2EDuration="2.206227584s" podCreationTimestamp="2026-01-24 08:27:09 +0000 UTC" firstStartedPulling="2026-01-24 08:27:10.196893107 +0000 UTC m=+2768.916766395" lastFinishedPulling="2026-01-24 08:27:10.601385129 +0000 UTC m=+2769.321258407" observedRunningTime="2026-01-24 08:27:11.202278121 +0000 UTC m=+2769.922151409" watchObservedRunningTime="2026-01-24 08:27:11.206227584 +0000 UTC m=+2769.926100872" Jan 24 08:27:37 crc kubenswrapper[4705]: I0124 08:27:37.071081 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:27:37 crc 
kubenswrapper[4705]: I0124 08:27:37.071646 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:28:07 crc kubenswrapper[4705]: I0124 08:28:07.071219 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:28:07 crc kubenswrapper[4705]: I0124 08:28:07.071815 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:28:07 crc kubenswrapper[4705]: I0124 08:28:07.071901 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 08:28:07 crc kubenswrapper[4705]: I0124 08:28:07.072733 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ff6b735ba6f0f350364f94fe8a29aa83a25781fc6a5acee75ef5ed63351363c"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 08:28:07 crc kubenswrapper[4705]: I0124 08:28:07.072791 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://8ff6b735ba6f0f350364f94fe8a29aa83a25781fc6a5acee75ef5ed63351363c" gracePeriod=600 Jan 24 08:28:07 crc kubenswrapper[4705]: I0124 08:28:07.712252 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerID="8ff6b735ba6f0f350364f94fe8a29aa83a25781fc6a5acee75ef5ed63351363c" exitCode=0 Jan 24 08:28:07 crc kubenswrapper[4705]: I0124 08:28:07.712336 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"8ff6b735ba6f0f350364f94fe8a29aa83a25781fc6a5acee75ef5ed63351363c"} Jan 24 08:28:07 crc kubenswrapper[4705]: I0124 08:28:07.712932 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff"} Jan 24 08:28:07 crc kubenswrapper[4705]: I0124 08:28:07.712991 4705 scope.go:117] "RemoveContainer" containerID="53714c87ffcba39c1c632761c30500e9a1278467cd4dcc850f9c4d48f789126f" Jan 24 08:28:18 crc kubenswrapper[4705]: I0124 08:28:18.619193 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-srn4g"] Jan 24 08:28:18 crc kubenswrapper[4705]: I0124 08:28:18.622096 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-srn4g" Jan 24 08:28:18 crc kubenswrapper[4705]: I0124 08:28:18.632312 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-srn4g"] Jan 24 08:28:18 crc kubenswrapper[4705]: I0124 08:28:18.800391 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd1c960-7a5d-48dd-bef7-2fd94acef48d-utilities\") pod \"community-operators-srn4g\" (UID: \"7bd1c960-7a5d-48dd-bef7-2fd94acef48d\") " pod="openshift-marketplace/community-operators-srn4g" Jan 24 08:28:18 crc kubenswrapper[4705]: I0124 08:28:18.800614 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd1c960-7a5d-48dd-bef7-2fd94acef48d-catalog-content\") pod \"community-operators-srn4g\" (UID: \"7bd1c960-7a5d-48dd-bef7-2fd94acef48d\") " pod="openshift-marketplace/community-operators-srn4g" Jan 24 08:28:18 crc kubenswrapper[4705]: I0124 08:28:18.800658 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f469\" (UniqueName: \"kubernetes.io/projected/7bd1c960-7a5d-48dd-bef7-2fd94acef48d-kube-api-access-9f469\") pod \"community-operators-srn4g\" (UID: \"7bd1c960-7a5d-48dd-bef7-2fd94acef48d\") " pod="openshift-marketplace/community-operators-srn4g" Jan 24 08:28:18 crc kubenswrapper[4705]: I0124 08:28:18.902567 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd1c960-7a5d-48dd-bef7-2fd94acef48d-utilities\") pod \"community-operators-srn4g\" (UID: \"7bd1c960-7a5d-48dd-bef7-2fd94acef48d\") " pod="openshift-marketplace/community-operators-srn4g" Jan 24 08:28:18 crc kubenswrapper[4705]: I0124 08:28:18.902644 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd1c960-7a5d-48dd-bef7-2fd94acef48d-catalog-content\") pod \"community-operators-srn4g\" (UID: \"7bd1c960-7a5d-48dd-bef7-2fd94acef48d\") " pod="openshift-marketplace/community-operators-srn4g" Jan 24 08:28:18 crc kubenswrapper[4705]: I0124 08:28:18.902666 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f469\" (UniqueName: \"kubernetes.io/projected/7bd1c960-7a5d-48dd-bef7-2fd94acef48d-kube-api-access-9f469\") pod \"community-operators-srn4g\" (UID: \"7bd1c960-7a5d-48dd-bef7-2fd94acef48d\") " pod="openshift-marketplace/community-operators-srn4g" Jan 24 08:28:18 crc kubenswrapper[4705]: I0124 08:28:18.903084 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd1c960-7a5d-48dd-bef7-2fd94acef48d-utilities\") pod \"community-operators-srn4g\" (UID: \"7bd1c960-7a5d-48dd-bef7-2fd94acef48d\") " pod="openshift-marketplace/community-operators-srn4g" Jan 24 08:28:18 crc kubenswrapper[4705]: I0124 08:28:18.903179 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd1c960-7a5d-48dd-bef7-2fd94acef48d-catalog-content\") pod \"community-operators-srn4g\" (UID: \"7bd1c960-7a5d-48dd-bef7-2fd94acef48d\") " pod="openshift-marketplace/community-operators-srn4g" Jan 24 08:28:18 crc kubenswrapper[4705]: I0124 08:28:18.921339 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f469\" (UniqueName: \"kubernetes.io/projected/7bd1c960-7a5d-48dd-bef7-2fd94acef48d-kube-api-access-9f469\") pod \"community-operators-srn4g\" (UID: \"7bd1c960-7a5d-48dd-bef7-2fd94acef48d\") " pod="openshift-marketplace/community-operators-srn4g" Jan 24 08:28:18 crc kubenswrapper[4705]: I0124 08:28:18.945405 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-srn4g" Jan 24 08:28:19 crc kubenswrapper[4705]: I0124 08:28:19.480391 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-srn4g"] Jan 24 08:28:19 crc kubenswrapper[4705]: I0124 08:28:19.823999 4705 generic.go:334] "Generic (PLEG): container finished" podID="7bd1c960-7a5d-48dd-bef7-2fd94acef48d" containerID="7ec41384dbcf4a9416768fb423ae8ccae0d7b841410d3a561add2fa4f93781f8" exitCode=0 Jan 24 08:28:19 crc kubenswrapper[4705]: I0124 08:28:19.824056 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srn4g" event={"ID":"7bd1c960-7a5d-48dd-bef7-2fd94acef48d","Type":"ContainerDied","Data":"7ec41384dbcf4a9416768fb423ae8ccae0d7b841410d3a561add2fa4f93781f8"} Jan 24 08:28:19 crc kubenswrapper[4705]: I0124 08:28:19.824090 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srn4g" event={"ID":"7bd1c960-7a5d-48dd-bef7-2fd94acef48d","Type":"ContainerStarted","Data":"a9364f23118c543aae811c2c7bca12839843ebab05e99f9950a7a5f5dd7fd8eb"} Jan 24 08:28:23 crc kubenswrapper[4705]: I0124 08:28:23.866426 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srn4g" event={"ID":"7bd1c960-7a5d-48dd-bef7-2fd94acef48d","Type":"ContainerStarted","Data":"97aafc4023f70282e9069e1b1bc8dbcee695e55c6837ff816d27cc47dc13e337"} Jan 24 08:28:24 crc kubenswrapper[4705]: I0124 08:28:24.879880 4705 generic.go:334] "Generic (PLEG): container finished" podID="7bd1c960-7a5d-48dd-bef7-2fd94acef48d" containerID="97aafc4023f70282e9069e1b1bc8dbcee695e55c6837ff816d27cc47dc13e337" exitCode=0 Jan 24 08:28:24 crc kubenswrapper[4705]: I0124 08:28:24.879935 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srn4g" 
event={"ID":"7bd1c960-7a5d-48dd-bef7-2fd94acef48d","Type":"ContainerDied","Data":"97aafc4023f70282e9069e1b1bc8dbcee695e55c6837ff816d27cc47dc13e337"} Jan 24 08:28:27 crc kubenswrapper[4705]: I0124 08:28:27.908174 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srn4g" event={"ID":"7bd1c960-7a5d-48dd-bef7-2fd94acef48d","Type":"ContainerStarted","Data":"9c06f0b6683559496f85164421c93a62f4d9ac6835a004ef87d54ec4c8c09d81"} Jan 24 08:28:28 crc kubenswrapper[4705]: I0124 08:28:28.946436 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-srn4g" Jan 24 08:28:28 crc kubenswrapper[4705]: I0124 08:28:28.946483 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-srn4g" Jan 24 08:28:29 crc kubenswrapper[4705]: I0124 08:28:29.991946 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-srn4g" podUID="7bd1c960-7a5d-48dd-bef7-2fd94acef48d" containerName="registry-server" probeResult="failure" output=< Jan 24 08:28:29 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Jan 24 08:28:29 crc kubenswrapper[4705]: > Jan 24 08:28:38 crc kubenswrapper[4705]: I0124 08:28:38.999413 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-srn4g" Jan 24 08:28:39 crc kubenswrapper[4705]: I0124 08:28:39.018155 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-srn4g" podStartSLOduration=13.980048384 podStartE2EDuration="21.018124479s" podCreationTimestamp="2026-01-24 08:28:18 +0000 UTC" firstStartedPulling="2026-01-24 08:28:19.82584036 +0000 UTC m=+2838.545713648" lastFinishedPulling="2026-01-24 08:28:26.863916455 +0000 UTC m=+2845.583789743" observedRunningTime="2026-01-24 08:28:27.960724704 +0000 UTC 
m=+2846.680598002" watchObservedRunningTime="2026-01-24 08:28:39.018124479 +0000 UTC m=+2857.737997767" Jan 24 08:28:39 crc kubenswrapper[4705]: I0124 08:28:39.057636 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-srn4g" Jan 24 08:28:39 crc kubenswrapper[4705]: I0124 08:28:39.152966 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-srn4g"] Jan 24 08:28:39 crc kubenswrapper[4705]: I0124 08:28:39.245033 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jxk9r"] Jan 24 08:28:39 crc kubenswrapper[4705]: I0124 08:28:39.245414 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jxk9r" podUID="06f954e9-b37e-4822-b132-331764c6f9ac" containerName="registry-server" containerID="cri-o://584805fc1d7ee53f965672f6d132beeabc6afdbecb492286ae09ebd524164759" gracePeriod=2 Jan 24 08:28:39 crc kubenswrapper[4705]: I0124 08:28:39.908129 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jxk9r" Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.056416 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f954e9-b37e-4822-b132-331764c6f9ac-utilities\") pod \"06f954e9-b37e-4822-b132-331764c6f9ac\" (UID: \"06f954e9-b37e-4822-b132-331764c6f9ac\") " Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.056552 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f954e9-b37e-4822-b132-331764c6f9ac-catalog-content\") pod \"06f954e9-b37e-4822-b132-331764c6f9ac\" (UID: \"06f954e9-b37e-4822-b132-331764c6f9ac\") " Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.056583 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9rt7\" (UniqueName: \"kubernetes.io/projected/06f954e9-b37e-4822-b132-331764c6f9ac-kube-api-access-q9rt7\") pod \"06f954e9-b37e-4822-b132-331764c6f9ac\" (UID: \"06f954e9-b37e-4822-b132-331764c6f9ac\") " Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.057295 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f954e9-b37e-4822-b132-331764c6f9ac-utilities" (OuterVolumeSpecName: "utilities") pod "06f954e9-b37e-4822-b132-331764c6f9ac" (UID: "06f954e9-b37e-4822-b132-331764c6f9ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.071060 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f954e9-b37e-4822-b132-331764c6f9ac-kube-api-access-q9rt7" (OuterVolumeSpecName: "kube-api-access-q9rt7") pod "06f954e9-b37e-4822-b132-331764c6f9ac" (UID: "06f954e9-b37e-4822-b132-331764c6f9ac"). InnerVolumeSpecName "kube-api-access-q9rt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.108092 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f954e9-b37e-4822-b132-331764c6f9ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06f954e9-b37e-4822-b132-331764c6f9ac" (UID: "06f954e9-b37e-4822-b132-331764c6f9ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.113839 4705 generic.go:334] "Generic (PLEG): container finished" podID="06f954e9-b37e-4822-b132-331764c6f9ac" containerID="584805fc1d7ee53f965672f6d132beeabc6afdbecb492286ae09ebd524164759" exitCode=0 Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.113928 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxk9r" event={"ID":"06f954e9-b37e-4822-b132-331764c6f9ac","Type":"ContainerDied","Data":"584805fc1d7ee53f965672f6d132beeabc6afdbecb492286ae09ebd524164759"} Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.113978 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxk9r" event={"ID":"06f954e9-b37e-4822-b132-331764c6f9ac","Type":"ContainerDied","Data":"012fbaa5fc31a8dee83488f01b612eddd637d4adf985e4f880c02fd29f2acd59"} Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.113996 4705 scope.go:117] "RemoveContainer" containerID="584805fc1d7ee53f965672f6d132beeabc6afdbecb492286ae09ebd524164759" Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.114481 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jxk9r" Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.136128 4705 scope.go:117] "RemoveContainer" containerID="366016949d9c12290319e61b8bdcf335a6b0d8545e2c36883df01fd6f7a5eb3f" Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.159169 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f954e9-b37e-4822-b132-331764c6f9ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.159490 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9rt7\" (UniqueName: \"kubernetes.io/projected/06f954e9-b37e-4822-b132-331764c6f9ac-kube-api-access-q9rt7\") on node \"crc\" DevicePath \"\"" Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.159554 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f954e9-b37e-4822-b132-331764c6f9ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.160419 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jxk9r"] Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.162927 4705 scope.go:117] "RemoveContainer" containerID="fd9303dfd67713a1810750cafb7514ba2f89f6d709d327110de020211c452d72" Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.172194 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jxk9r"] Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.228134 4705 scope.go:117] "RemoveContainer" containerID="584805fc1d7ee53f965672f6d132beeabc6afdbecb492286ae09ebd524164759" Jan 24 08:28:40 crc kubenswrapper[4705]: E0124 08:28:40.228592 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"584805fc1d7ee53f965672f6d132beeabc6afdbecb492286ae09ebd524164759\": container with ID starting with 584805fc1d7ee53f965672f6d132beeabc6afdbecb492286ae09ebd524164759 not found: ID does not exist" containerID="584805fc1d7ee53f965672f6d132beeabc6afdbecb492286ae09ebd524164759" Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.228636 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584805fc1d7ee53f965672f6d132beeabc6afdbecb492286ae09ebd524164759"} err="failed to get container status \"584805fc1d7ee53f965672f6d132beeabc6afdbecb492286ae09ebd524164759\": rpc error: code = NotFound desc = could not find container \"584805fc1d7ee53f965672f6d132beeabc6afdbecb492286ae09ebd524164759\": container with ID starting with 584805fc1d7ee53f965672f6d132beeabc6afdbecb492286ae09ebd524164759 not found: ID does not exist" Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.228668 4705 scope.go:117] "RemoveContainer" containerID="366016949d9c12290319e61b8bdcf335a6b0d8545e2c36883df01fd6f7a5eb3f" Jan 24 08:28:40 crc kubenswrapper[4705]: E0124 08:28:40.228980 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"366016949d9c12290319e61b8bdcf335a6b0d8545e2c36883df01fd6f7a5eb3f\": container with ID starting with 366016949d9c12290319e61b8bdcf335a6b0d8545e2c36883df01fd6f7a5eb3f not found: ID does not exist" containerID="366016949d9c12290319e61b8bdcf335a6b0d8545e2c36883df01fd6f7a5eb3f" Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.229042 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"366016949d9c12290319e61b8bdcf335a6b0d8545e2c36883df01fd6f7a5eb3f"} err="failed to get container status \"366016949d9c12290319e61b8bdcf335a6b0d8545e2c36883df01fd6f7a5eb3f\": rpc error: code = NotFound desc = could not find container \"366016949d9c12290319e61b8bdcf335a6b0d8545e2c36883df01fd6f7a5eb3f\": container with ID 
starting with 366016949d9c12290319e61b8bdcf335a6b0d8545e2c36883df01fd6f7a5eb3f not found: ID does not exist" Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.229077 4705 scope.go:117] "RemoveContainer" containerID="fd9303dfd67713a1810750cafb7514ba2f89f6d709d327110de020211c452d72" Jan 24 08:28:40 crc kubenswrapper[4705]: E0124 08:28:40.229542 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd9303dfd67713a1810750cafb7514ba2f89f6d709d327110de020211c452d72\": container with ID starting with fd9303dfd67713a1810750cafb7514ba2f89f6d709d327110de020211c452d72 not found: ID does not exist" containerID="fd9303dfd67713a1810750cafb7514ba2f89f6d709d327110de020211c452d72" Jan 24 08:28:40 crc kubenswrapper[4705]: I0124 08:28:40.229583 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd9303dfd67713a1810750cafb7514ba2f89f6d709d327110de020211c452d72"} err="failed to get container status \"fd9303dfd67713a1810750cafb7514ba2f89f6d709d327110de020211c452d72\": rpc error: code = NotFound desc = could not find container \"fd9303dfd67713a1810750cafb7514ba2f89f6d709d327110de020211c452d72\": container with ID starting with fd9303dfd67713a1810750cafb7514ba2f89f6d709d327110de020211c452d72 not found: ID does not exist" Jan 24 08:28:41 crc kubenswrapper[4705]: I0124 08:28:41.586641 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06f954e9-b37e-4822-b132-331764c6f9ac" path="/var/lib/kubelet/pods/06f954e9-b37e-4822-b132-331764c6f9ac/volumes" Jan 24 08:29:37 crc kubenswrapper[4705]: I0124 08:29:37.594339 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xcfpv"] Jan 24 08:29:37 crc kubenswrapper[4705]: E0124 08:29:37.596794 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f954e9-b37e-4822-b132-331764c6f9ac" containerName="extract-utilities" Jan 24 08:29:37 crc 
kubenswrapper[4705]: I0124 08:29:37.596923 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f954e9-b37e-4822-b132-331764c6f9ac" containerName="extract-utilities" Jan 24 08:29:37 crc kubenswrapper[4705]: E0124 08:29:37.596999 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f954e9-b37e-4822-b132-331764c6f9ac" containerName="extract-content" Jan 24 08:29:37 crc kubenswrapper[4705]: I0124 08:29:37.597058 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f954e9-b37e-4822-b132-331764c6f9ac" containerName="extract-content" Jan 24 08:29:37 crc kubenswrapper[4705]: E0124 08:29:37.597141 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f954e9-b37e-4822-b132-331764c6f9ac" containerName="registry-server" Jan 24 08:29:37 crc kubenswrapper[4705]: I0124 08:29:37.597215 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f954e9-b37e-4822-b132-331764c6f9ac" containerName="registry-server" Jan 24 08:29:37 crc kubenswrapper[4705]: I0124 08:29:37.597563 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f954e9-b37e-4822-b132-331764c6f9ac" containerName="registry-server" Jan 24 08:29:37 crc kubenswrapper[4705]: I0124 08:29:37.599234 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:37 crc kubenswrapper[4705]: I0124 08:29:37.626081 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcfpv"] Jan 24 08:29:37 crc kubenswrapper[4705]: I0124 08:29:37.740734 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-catalog-content\") pod \"certified-operators-xcfpv\" (UID: \"ed0008dd-e714-4a0c-9821-7e9ba6a8d366\") " pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:37 crc kubenswrapper[4705]: I0124 08:29:37.740939 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbd2j\" (UniqueName: \"kubernetes.io/projected/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-kube-api-access-pbd2j\") pod \"certified-operators-xcfpv\" (UID: \"ed0008dd-e714-4a0c-9821-7e9ba6a8d366\") " pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:37 crc kubenswrapper[4705]: I0124 08:29:37.740975 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-utilities\") pod \"certified-operators-xcfpv\" (UID: \"ed0008dd-e714-4a0c-9821-7e9ba6a8d366\") " pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:37 crc kubenswrapper[4705]: I0124 08:29:37.843292 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-catalog-content\") pod \"certified-operators-xcfpv\" (UID: \"ed0008dd-e714-4a0c-9821-7e9ba6a8d366\") " pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:37 crc kubenswrapper[4705]: I0124 08:29:37.843450 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pbd2j\" (UniqueName: \"kubernetes.io/projected/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-kube-api-access-pbd2j\") pod \"certified-operators-xcfpv\" (UID: \"ed0008dd-e714-4a0c-9821-7e9ba6a8d366\") " pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:37 crc kubenswrapper[4705]: I0124 08:29:37.843480 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-utilities\") pod \"certified-operators-xcfpv\" (UID: \"ed0008dd-e714-4a0c-9821-7e9ba6a8d366\") " pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:37 crc kubenswrapper[4705]: I0124 08:29:37.843873 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-catalog-content\") pod \"certified-operators-xcfpv\" (UID: \"ed0008dd-e714-4a0c-9821-7e9ba6a8d366\") " pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:37 crc kubenswrapper[4705]: I0124 08:29:37.843945 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-utilities\") pod \"certified-operators-xcfpv\" (UID: \"ed0008dd-e714-4a0c-9821-7e9ba6a8d366\") " pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:37 crc kubenswrapper[4705]: I0124 08:29:37.873073 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbd2j\" (UniqueName: \"kubernetes.io/projected/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-kube-api-access-pbd2j\") pod \"certified-operators-xcfpv\" (UID: \"ed0008dd-e714-4a0c-9821-7e9ba6a8d366\") " pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:37 crc kubenswrapper[4705]: I0124 08:29:37.941902 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:38 crc kubenswrapper[4705]: I0124 08:29:38.568964 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcfpv"] Jan 24 08:29:39 crc kubenswrapper[4705]: I0124 08:29:39.044756 4705 generic.go:334] "Generic (PLEG): container finished" podID="ed0008dd-e714-4a0c-9821-7e9ba6a8d366" containerID="23d573b4a9f5af4cd788e68900ceb760a9b29cc971101c4a0a099fc75e4daa55" exitCode=0 Jan 24 08:29:39 crc kubenswrapper[4705]: I0124 08:29:39.044854 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcfpv" event={"ID":"ed0008dd-e714-4a0c-9821-7e9ba6a8d366","Type":"ContainerDied","Data":"23d573b4a9f5af4cd788e68900ceb760a9b29cc971101c4a0a099fc75e4daa55"} Jan 24 08:29:39 crc kubenswrapper[4705]: I0124 08:29:39.045080 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcfpv" event={"ID":"ed0008dd-e714-4a0c-9821-7e9ba6a8d366","Type":"ContainerStarted","Data":"1ae16193379ead3ae07545f56a577463cfba74fbe79020608e76fa5a188a841e"} Jan 24 08:29:39 crc kubenswrapper[4705]: I0124 08:29:39.047419 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 08:29:40 crc kubenswrapper[4705]: I0124 08:29:40.056099 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcfpv" event={"ID":"ed0008dd-e714-4a0c-9821-7e9ba6a8d366","Type":"ContainerStarted","Data":"2bc38b93b11c46586ca795c5b4d922abbafd61cd8126b4a8a19857337e161b64"} Jan 24 08:29:41 crc kubenswrapper[4705]: I0124 08:29:41.066841 4705 generic.go:334] "Generic (PLEG): container finished" podID="ed0008dd-e714-4a0c-9821-7e9ba6a8d366" containerID="2bc38b93b11c46586ca795c5b4d922abbafd61cd8126b4a8a19857337e161b64" exitCode=0 Jan 24 08:29:41 crc kubenswrapper[4705]: I0124 08:29:41.066886 4705 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-xcfpv" event={"ID":"ed0008dd-e714-4a0c-9821-7e9ba6a8d366","Type":"ContainerDied","Data":"2bc38b93b11c46586ca795c5b4d922abbafd61cd8126b4a8a19857337e161b64"} Jan 24 08:29:42 crc kubenswrapper[4705]: I0124 08:29:42.080277 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcfpv" event={"ID":"ed0008dd-e714-4a0c-9821-7e9ba6a8d366","Type":"ContainerStarted","Data":"745edf113cf1e8364ebea4461b75fea4f538ca0b76dde0c2a32aaaa048930c64"} Jan 24 08:29:42 crc kubenswrapper[4705]: I0124 08:29:42.100159 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xcfpv" podStartSLOduration=2.651601614 podStartE2EDuration="5.100134762s" podCreationTimestamp="2026-01-24 08:29:37 +0000 UTC" firstStartedPulling="2026-01-24 08:29:39.047028259 +0000 UTC m=+2917.766901547" lastFinishedPulling="2026-01-24 08:29:41.495561407 +0000 UTC m=+2920.215434695" observedRunningTime="2026-01-24 08:29:42.097420804 +0000 UTC m=+2920.817294092" watchObservedRunningTime="2026-01-24 08:29:42.100134762 +0000 UTC m=+2920.820008050" Jan 24 08:29:47 crc kubenswrapper[4705]: I0124 08:29:47.942145 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:47 crc kubenswrapper[4705]: I0124 08:29:47.943843 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:47 crc kubenswrapper[4705]: I0124 08:29:47.990403 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:48 crc kubenswrapper[4705]: I0124 08:29:48.194966 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:48 crc kubenswrapper[4705]: I0124 
08:29:48.275064 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcfpv"] Jan 24 08:29:50 crc kubenswrapper[4705]: I0124 08:29:50.166209 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xcfpv" podUID="ed0008dd-e714-4a0c-9821-7e9ba6a8d366" containerName="registry-server" containerID="cri-o://745edf113cf1e8364ebea4461b75fea4f538ca0b76dde0c2a32aaaa048930c64" gracePeriod=2 Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.117512 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.181901 4705 generic.go:334] "Generic (PLEG): container finished" podID="ed0008dd-e714-4a0c-9821-7e9ba6a8d366" containerID="745edf113cf1e8364ebea4461b75fea4f538ca0b76dde0c2a32aaaa048930c64" exitCode=0 Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.181956 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcfpv" event={"ID":"ed0008dd-e714-4a0c-9821-7e9ba6a8d366","Type":"ContainerDied","Data":"745edf113cf1e8364ebea4461b75fea4f538ca0b76dde0c2a32aaaa048930c64"} Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.181982 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xcfpv" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.182004 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcfpv" event={"ID":"ed0008dd-e714-4a0c-9821-7e9ba6a8d366","Type":"ContainerDied","Data":"1ae16193379ead3ae07545f56a577463cfba74fbe79020608e76fa5a188a841e"} Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.182042 4705 scope.go:117] "RemoveContainer" containerID="745edf113cf1e8364ebea4461b75fea4f538ca0b76dde0c2a32aaaa048930c64" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.205041 4705 scope.go:117] "RemoveContainer" containerID="2bc38b93b11c46586ca795c5b4d922abbafd61cd8126b4a8a19857337e161b64" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.225353 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbd2j\" (UniqueName: \"kubernetes.io/projected/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-kube-api-access-pbd2j\") pod \"ed0008dd-e714-4a0c-9821-7e9ba6a8d366\" (UID: \"ed0008dd-e714-4a0c-9821-7e9ba6a8d366\") " Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.225505 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-catalog-content\") pod \"ed0008dd-e714-4a0c-9821-7e9ba6a8d366\" (UID: \"ed0008dd-e714-4a0c-9821-7e9ba6a8d366\") " Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.226401 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-utilities\") pod \"ed0008dd-e714-4a0c-9821-7e9ba6a8d366\" (UID: \"ed0008dd-e714-4a0c-9821-7e9ba6a8d366\") " Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.227225 4705 scope.go:117] "RemoveContainer" 
containerID="23d573b4a9f5af4cd788e68900ceb760a9b29cc971101c4a0a099fc75e4daa55" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.227445 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-utilities" (OuterVolumeSpecName: "utilities") pod "ed0008dd-e714-4a0c-9821-7e9ba6a8d366" (UID: "ed0008dd-e714-4a0c-9821-7e9ba6a8d366"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.231403 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-kube-api-access-pbd2j" (OuterVolumeSpecName: "kube-api-access-pbd2j") pod "ed0008dd-e714-4a0c-9821-7e9ba6a8d366" (UID: "ed0008dd-e714-4a0c-9821-7e9ba6a8d366"). InnerVolumeSpecName "kube-api-access-pbd2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.270510 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed0008dd-e714-4a0c-9821-7e9ba6a8d366" (UID: "ed0008dd-e714-4a0c-9821-7e9ba6a8d366"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.319204 4705 scope.go:117] "RemoveContainer" containerID="745edf113cf1e8364ebea4461b75fea4f538ca0b76dde0c2a32aaaa048930c64" Jan 24 08:29:51 crc kubenswrapper[4705]: E0124 08:29:51.319640 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"745edf113cf1e8364ebea4461b75fea4f538ca0b76dde0c2a32aaaa048930c64\": container with ID starting with 745edf113cf1e8364ebea4461b75fea4f538ca0b76dde0c2a32aaaa048930c64 not found: ID does not exist" containerID="745edf113cf1e8364ebea4461b75fea4f538ca0b76dde0c2a32aaaa048930c64" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.319706 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"745edf113cf1e8364ebea4461b75fea4f538ca0b76dde0c2a32aaaa048930c64"} err="failed to get container status \"745edf113cf1e8364ebea4461b75fea4f538ca0b76dde0c2a32aaaa048930c64\": rpc error: code = NotFound desc = could not find container \"745edf113cf1e8364ebea4461b75fea4f538ca0b76dde0c2a32aaaa048930c64\": container with ID starting with 745edf113cf1e8364ebea4461b75fea4f538ca0b76dde0c2a32aaaa048930c64 not found: ID does not exist" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.319730 4705 scope.go:117] "RemoveContainer" containerID="2bc38b93b11c46586ca795c5b4d922abbafd61cd8126b4a8a19857337e161b64" Jan 24 08:29:51 crc kubenswrapper[4705]: E0124 08:29:51.320196 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bc38b93b11c46586ca795c5b4d922abbafd61cd8126b4a8a19857337e161b64\": container with ID starting with 2bc38b93b11c46586ca795c5b4d922abbafd61cd8126b4a8a19857337e161b64 not found: ID does not exist" containerID="2bc38b93b11c46586ca795c5b4d922abbafd61cd8126b4a8a19857337e161b64" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.320227 
4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bc38b93b11c46586ca795c5b4d922abbafd61cd8126b4a8a19857337e161b64"} err="failed to get container status \"2bc38b93b11c46586ca795c5b4d922abbafd61cd8126b4a8a19857337e161b64\": rpc error: code = NotFound desc = could not find container \"2bc38b93b11c46586ca795c5b4d922abbafd61cd8126b4a8a19857337e161b64\": container with ID starting with 2bc38b93b11c46586ca795c5b4d922abbafd61cd8126b4a8a19857337e161b64 not found: ID does not exist" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.320242 4705 scope.go:117] "RemoveContainer" containerID="23d573b4a9f5af4cd788e68900ceb760a9b29cc971101c4a0a099fc75e4daa55" Jan 24 08:29:51 crc kubenswrapper[4705]: E0124 08:29:51.320760 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23d573b4a9f5af4cd788e68900ceb760a9b29cc971101c4a0a099fc75e4daa55\": container with ID starting with 23d573b4a9f5af4cd788e68900ceb760a9b29cc971101c4a0a099fc75e4daa55 not found: ID does not exist" containerID="23d573b4a9f5af4cd788e68900ceb760a9b29cc971101c4a0a099fc75e4daa55" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.320864 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23d573b4a9f5af4cd788e68900ceb760a9b29cc971101c4a0a099fc75e4daa55"} err="failed to get container status \"23d573b4a9f5af4cd788e68900ceb760a9b29cc971101c4a0a099fc75e4daa55\": rpc error: code = NotFound desc = could not find container \"23d573b4a9f5af4cd788e68900ceb760a9b29cc971101c4a0a099fc75e4daa55\": container with ID starting with 23d573b4a9f5af4cd788e68900ceb760a9b29cc971101c4a0a099fc75e4daa55 not found: ID does not exist" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.329807 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.329873 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.329889 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbd2j\" (UniqueName: \"kubernetes.io/projected/ed0008dd-e714-4a0c-9821-7e9ba6a8d366-kube-api-access-pbd2j\") on node \"crc\" DevicePath \"\"" Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.514322 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcfpv"] Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.523240 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xcfpv"] Jan 24 08:29:51 crc kubenswrapper[4705]: I0124 08:29:51.585679 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed0008dd-e714-4a0c-9821-7e9ba6a8d366" path="/var/lib/kubelet/pods/ed0008dd-e714-4a0c-9821-7e9ba6a8d366/volumes" Jan 24 08:29:58 crc kubenswrapper[4705]: I0124 08:29:58.372709 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-58599c4547-sbsm4" podUID="1cce5e47-bb96-4468-8818-29869d013b7b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.147114 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc"] Jan 24 08:30:00 crc kubenswrapper[4705]: E0124 08:30:00.148048 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0008dd-e714-4a0c-9821-7e9ba6a8d366" containerName="extract-content" Jan 24 08:30:00 crc kubenswrapper[4705]: 
I0124 08:30:00.148071 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0008dd-e714-4a0c-9821-7e9ba6a8d366" containerName="extract-content" Jan 24 08:30:00 crc kubenswrapper[4705]: E0124 08:30:00.148098 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0008dd-e714-4a0c-9821-7e9ba6a8d366" containerName="registry-server" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.148106 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0008dd-e714-4a0c-9821-7e9ba6a8d366" containerName="registry-server" Jan 24 08:30:00 crc kubenswrapper[4705]: E0124 08:30:00.148133 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0008dd-e714-4a0c-9821-7e9ba6a8d366" containerName="extract-utilities" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.148142 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0008dd-e714-4a0c-9821-7e9ba6a8d366" containerName="extract-utilities" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.148409 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0008dd-e714-4a0c-9821-7e9ba6a8d366" containerName="registry-server" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.149481 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.152796 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.153202 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.173317 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc"] Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.317950 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9609b625-38c5-4306-97ba-d3870b4cabb4-config-volume\") pod \"collect-profiles-29487390-v4wmc\" (UID: \"9609b625-38c5-4306-97ba-d3870b4cabb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.318659 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9609b625-38c5-4306-97ba-d3870b4cabb4-secret-volume\") pod \"collect-profiles-29487390-v4wmc\" (UID: \"9609b625-38c5-4306-97ba-d3870b4cabb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.319012 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qhkr\" (UniqueName: \"kubernetes.io/projected/9609b625-38c5-4306-97ba-d3870b4cabb4-kube-api-access-7qhkr\") pod \"collect-profiles-29487390-v4wmc\" (UID: \"9609b625-38c5-4306-97ba-d3870b4cabb4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.421300 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qhkr\" (UniqueName: \"kubernetes.io/projected/9609b625-38c5-4306-97ba-d3870b4cabb4-kube-api-access-7qhkr\") pod \"collect-profiles-29487390-v4wmc\" (UID: \"9609b625-38c5-4306-97ba-d3870b4cabb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.421441 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9609b625-38c5-4306-97ba-d3870b4cabb4-config-volume\") pod \"collect-profiles-29487390-v4wmc\" (UID: \"9609b625-38c5-4306-97ba-d3870b4cabb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.421596 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9609b625-38c5-4306-97ba-d3870b4cabb4-secret-volume\") pod \"collect-profiles-29487390-v4wmc\" (UID: \"9609b625-38c5-4306-97ba-d3870b4cabb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.422456 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9609b625-38c5-4306-97ba-d3870b4cabb4-config-volume\") pod \"collect-profiles-29487390-v4wmc\" (UID: \"9609b625-38c5-4306-97ba-d3870b4cabb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.438483 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9609b625-38c5-4306-97ba-d3870b4cabb4-secret-volume\") pod \"collect-profiles-29487390-v4wmc\" (UID: \"9609b625-38c5-4306-97ba-d3870b4cabb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.447602 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qhkr\" (UniqueName: \"kubernetes.io/projected/9609b625-38c5-4306-97ba-d3870b4cabb4-kube-api-access-7qhkr\") pod \"collect-profiles-29487390-v4wmc\" (UID: \"9609b625-38c5-4306-97ba-d3870b4cabb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.488063 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" Jan 24 08:30:00 crc kubenswrapper[4705]: I0124 08:30:00.935603 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc"] Jan 24 08:30:01 crc kubenswrapper[4705]: I0124 08:30:01.275186 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" event={"ID":"9609b625-38c5-4306-97ba-d3870b4cabb4","Type":"ContainerStarted","Data":"a26a11f0fb2402eb572288bd28c54f0103d634c47a5c2af3b72739b3190cbd9e"} Jan 24 08:30:01 crc kubenswrapper[4705]: I0124 08:30:01.275740 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" event={"ID":"9609b625-38c5-4306-97ba-d3870b4cabb4","Type":"ContainerStarted","Data":"f563e86cae3f22669cbf533e0900ebef8d92ffc52e3feadb7af3d0389e04db41"} Jan 24 08:30:01 crc kubenswrapper[4705]: I0124 08:30:01.312726 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" 
podStartSLOduration=1.312704335 podStartE2EDuration="1.312704335s" podCreationTimestamp="2026-01-24 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:30:01.308168255 +0000 UTC m=+2940.028041573" watchObservedRunningTime="2026-01-24 08:30:01.312704335 +0000 UTC m=+2940.032577623" Jan 24 08:30:02 crc kubenswrapper[4705]: I0124 08:30:02.285515 4705 generic.go:334] "Generic (PLEG): container finished" podID="9609b625-38c5-4306-97ba-d3870b4cabb4" containerID="a26a11f0fb2402eb572288bd28c54f0103d634c47a5c2af3b72739b3190cbd9e" exitCode=0 Jan 24 08:30:02 crc kubenswrapper[4705]: I0124 08:30:02.285907 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" event={"ID":"9609b625-38c5-4306-97ba-d3870b4cabb4","Type":"ContainerDied","Data":"a26a11f0fb2402eb572288bd28c54f0103d634c47a5c2af3b72739b3190cbd9e"} Jan 24 08:30:03 crc kubenswrapper[4705]: I0124 08:30:03.732540 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" Jan 24 08:30:03 crc kubenswrapper[4705]: I0124 08:30:03.892204 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9609b625-38c5-4306-97ba-d3870b4cabb4-secret-volume\") pod \"9609b625-38c5-4306-97ba-d3870b4cabb4\" (UID: \"9609b625-38c5-4306-97ba-d3870b4cabb4\") " Jan 24 08:30:03 crc kubenswrapper[4705]: I0124 08:30:03.892859 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qhkr\" (UniqueName: \"kubernetes.io/projected/9609b625-38c5-4306-97ba-d3870b4cabb4-kube-api-access-7qhkr\") pod \"9609b625-38c5-4306-97ba-d3870b4cabb4\" (UID: \"9609b625-38c5-4306-97ba-d3870b4cabb4\") " Jan 24 08:30:03 crc kubenswrapper[4705]: I0124 08:30:03.893552 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9609b625-38c5-4306-97ba-d3870b4cabb4-config-volume\") pod \"9609b625-38c5-4306-97ba-d3870b4cabb4\" (UID: \"9609b625-38c5-4306-97ba-d3870b4cabb4\") " Jan 24 08:30:03 crc kubenswrapper[4705]: I0124 08:30:03.893915 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9609b625-38c5-4306-97ba-d3870b4cabb4-config-volume" (OuterVolumeSpecName: "config-volume") pod "9609b625-38c5-4306-97ba-d3870b4cabb4" (UID: "9609b625-38c5-4306-97ba-d3870b4cabb4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:30:03 crc kubenswrapper[4705]: I0124 08:30:03.894446 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9609b625-38c5-4306-97ba-d3870b4cabb4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 08:30:03 crc kubenswrapper[4705]: I0124 08:30:03.898779 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9609b625-38c5-4306-97ba-d3870b4cabb4-kube-api-access-7qhkr" (OuterVolumeSpecName: "kube-api-access-7qhkr") pod "9609b625-38c5-4306-97ba-d3870b4cabb4" (UID: "9609b625-38c5-4306-97ba-d3870b4cabb4"). InnerVolumeSpecName "kube-api-access-7qhkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:30:03 crc kubenswrapper[4705]: I0124 08:30:03.904169 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9609b625-38c5-4306-97ba-d3870b4cabb4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9609b625-38c5-4306-97ba-d3870b4cabb4" (UID: "9609b625-38c5-4306-97ba-d3870b4cabb4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:30:03 crc kubenswrapper[4705]: I0124 08:30:03.997683 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qhkr\" (UniqueName: \"kubernetes.io/projected/9609b625-38c5-4306-97ba-d3870b4cabb4-kube-api-access-7qhkr\") on node \"crc\" DevicePath \"\"" Jan 24 08:30:03 crc kubenswrapper[4705]: I0124 08:30:03.997772 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9609b625-38c5-4306-97ba-d3870b4cabb4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 08:30:04 crc kubenswrapper[4705]: I0124 08:30:04.303312 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" event={"ID":"9609b625-38c5-4306-97ba-d3870b4cabb4","Type":"ContainerDied","Data":"f563e86cae3f22669cbf533e0900ebef8d92ffc52e3feadb7af3d0389e04db41"} Jan 24 08:30:04 crc kubenswrapper[4705]: I0124 08:30:04.303348 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f563e86cae3f22669cbf533e0900ebef8d92ffc52e3feadb7af3d0389e04db41" Jan 24 08:30:04 crc kubenswrapper[4705]: I0124 08:30:04.303368 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487390-v4wmc" Jan 24 08:30:04 crc kubenswrapper[4705]: I0124 08:30:04.809608 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m"] Jan 24 08:30:04 crc kubenswrapper[4705]: I0124 08:30:04.817664 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487345-8229m"] Jan 24 08:30:05 crc kubenswrapper[4705]: I0124 08:30:05.588395 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc53e87-c43b-49ce-adf1-030634af0ad2" path="/var/lib/kubelet/pods/2dc53e87-c43b-49ce-adf1-030634af0ad2/volumes" Jan 24 08:30:07 crc kubenswrapper[4705]: I0124 08:30:07.071282 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:30:07 crc kubenswrapper[4705]: I0124 08:30:07.071809 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:30:07 crc kubenswrapper[4705]: I0124 08:30:07.299762 4705 scope.go:117] "RemoveContainer" containerID="72d39d9f81fa8dd014c45af7794cc09ffdaddd67b76bf54199559b68edebe2f9" Jan 24 08:30:08 crc kubenswrapper[4705]: I0124 08:30:08.338169 4705 generic.go:334] "Generic (PLEG): container finished" podID="ca587b10-b782-4dd1-a3fa-e9dfd773a2e3" containerID="b74172c01b1121e370867adb52658f796b53210479a5b172bfc458aac1009e45" exitCode=0 Jan 24 08:30:08 crc kubenswrapper[4705]: I0124 08:30:08.338223 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" event={"ID":"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3","Type":"ContainerDied","Data":"b74172c01b1121e370867adb52658f796b53210479a5b172bfc458aac1009e45"} Jan 24 08:30:09 crc kubenswrapper[4705]: I0124 08:30:09.751135 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:30:09 crc kubenswrapper[4705]: I0124 08:30:09.913886 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-2\") pod \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " Jan 24 08:30:09 crc kubenswrapper[4705]: I0124 08:30:09.914000 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-telemetry-combined-ca-bundle\") pod \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " Jan 24 08:30:09 crc kubenswrapper[4705]: I0124 08:30:09.914031 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4w62\" (UniqueName: \"kubernetes.io/projected/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-kube-api-access-p4w62\") pod \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " Jan 24 08:30:09 crc kubenswrapper[4705]: I0124 08:30:09.914065 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-1\") pod \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") 
" Jan 24 08:30:09 crc kubenswrapper[4705]: I0124 08:30:09.914108 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ssh-key-openstack-edpm-ipam\") pod \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " Jan 24 08:30:09 crc kubenswrapper[4705]: I0124 08:30:09.914186 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-0\") pod \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " Jan 24 08:30:09 crc kubenswrapper[4705]: I0124 08:30:09.914258 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-inventory\") pod \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\" (UID: \"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3\") " Jan 24 08:30:09 crc kubenswrapper[4705]: I0124 08:30:09.919796 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ca587b10-b782-4dd1-a3fa-e9dfd773a2e3" (UID: "ca587b10-b782-4dd1-a3fa-e9dfd773a2e3"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:30:09 crc kubenswrapper[4705]: I0124 08:30:09.920357 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-kube-api-access-p4w62" (OuterVolumeSpecName: "kube-api-access-p4w62") pod "ca587b10-b782-4dd1-a3fa-e9dfd773a2e3" (UID: "ca587b10-b782-4dd1-a3fa-e9dfd773a2e3"). InnerVolumeSpecName "kube-api-access-p4w62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:30:09 crc kubenswrapper[4705]: I0124 08:30:09.945740 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-inventory" (OuterVolumeSpecName: "inventory") pod "ca587b10-b782-4dd1-a3fa-e9dfd773a2e3" (UID: "ca587b10-b782-4dd1-a3fa-e9dfd773a2e3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:30:09 crc kubenswrapper[4705]: I0124 08:30:09.946096 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "ca587b10-b782-4dd1-a3fa-e9dfd773a2e3" (UID: "ca587b10-b782-4dd1-a3fa-e9dfd773a2e3"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:30:09 crc kubenswrapper[4705]: I0124 08:30:09.948021 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ca587b10-b782-4dd1-a3fa-e9dfd773a2e3" (UID: "ca587b10-b782-4dd1-a3fa-e9dfd773a2e3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:30:09 crc kubenswrapper[4705]: I0124 08:30:09.948551 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "ca587b10-b782-4dd1-a3fa-e9dfd773a2e3" (UID: "ca587b10-b782-4dd1-a3fa-e9dfd773a2e3"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:30:09 crc kubenswrapper[4705]: I0124 08:30:09.954002 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "ca587b10-b782-4dd1-a3fa-e9dfd773a2e3" (UID: "ca587b10-b782-4dd1-a3fa-e9dfd773a2e3"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:30:10 crc kubenswrapper[4705]: I0124 08:30:10.017154 4705 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:30:10 crc kubenswrapper[4705]: I0124 08:30:10.017246 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4w62\" (UniqueName: \"kubernetes.io/projected/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-kube-api-access-p4w62\") on node \"crc\" DevicePath \"\"" Jan 24 08:30:10 crc kubenswrapper[4705]: I0124 08:30:10.017266 4705 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 24 08:30:10 crc kubenswrapper[4705]: I0124 08:30:10.017307 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 08:30:10 crc kubenswrapper[4705]: I0124 08:30:10.017320 4705 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-0\") on node \"crc\" 
DevicePath \"\"" Jan 24 08:30:10 crc kubenswrapper[4705]: I0124 08:30:10.017333 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 08:30:10 crc kubenswrapper[4705]: I0124 08:30:10.017346 4705 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ca587b10-b782-4dd1-a3fa-e9dfd773a2e3-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 24 08:30:10 crc kubenswrapper[4705]: I0124 08:30:10.379067 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" event={"ID":"ca587b10-b782-4dd1-a3fa-e9dfd773a2e3","Type":"ContainerDied","Data":"4cc680425600d6eb98e9de1c53d4149b5a92508b7e6fc5831d2eece8d314166f"} Jan 24 08:30:10 crc kubenswrapper[4705]: I0124 08:30:10.379111 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cc680425600d6eb98e9de1c53d4149b5a92508b7e6fc5831d2eece8d314166f" Jan 24 08:30:10 crc kubenswrapper[4705]: I0124 08:30:10.379139 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p" Jan 24 08:30:37 crc kubenswrapper[4705]: I0124 08:30:37.071512 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:30:37 crc kubenswrapper[4705]: I0124 08:30:37.072100 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:31:07 crc kubenswrapper[4705]: I0124 08:31:07.071198 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:31:07 crc kubenswrapper[4705]: I0124 08:31:07.071727 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:31:07 crc kubenswrapper[4705]: I0124 08:31:07.071857 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 08:31:07 crc kubenswrapper[4705]: I0124 08:31:07.072677 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 08:31:07 crc kubenswrapper[4705]: I0124 08:31:07.072730 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" gracePeriod=600 Jan 24 08:31:07 crc kubenswrapper[4705]: E0124 08:31:07.272417 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:31:07 crc kubenswrapper[4705]: E0124 08:31:07.336084 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7b3b969_5164_4f10_8758_72b7e2f4b762.slice/crio-conmon-28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7b3b969_5164_4f10_8758_72b7e2f4b762.slice/crio-28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff.scope\": RecentStats: unable to find data in memory cache]" Jan 24 08:31:07 crc kubenswrapper[4705]: I0124 08:31:07.946860 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" 
containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" exitCode=0 Jan 24 08:31:07 crc kubenswrapper[4705]: I0124 08:31:07.946954 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff"} Jan 24 08:31:07 crc kubenswrapper[4705]: I0124 08:31:07.947241 4705 scope.go:117] "RemoveContainer" containerID="8ff6b735ba6f0f350364f94fe8a29aa83a25781fc6a5acee75ef5ed63351363c" Jan 24 08:31:07 crc kubenswrapper[4705]: I0124 08:31:07.947751 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:31:07 crc kubenswrapper[4705]: E0124 08:31:07.949713 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:31:19 crc kubenswrapper[4705]: I0124 08:31:19.575777 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:31:19 crc kubenswrapper[4705]: E0124 08:31:19.576557 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:31:34 crc kubenswrapper[4705]: I0124 
08:31:34.576440 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:31:34 crc kubenswrapper[4705]: E0124 08:31:34.578786 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:31:48 crc kubenswrapper[4705]: I0124 08:31:48.576072 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:31:48 crc kubenswrapper[4705]: E0124 08:31:48.576891 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:32:01 crc kubenswrapper[4705]: I0124 08:32:01.586606 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:32:01 crc kubenswrapper[4705]: E0124 08:32:01.587427 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:32:13 crc 
kubenswrapper[4705]: I0124 08:32:13.576120 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:32:13 crc kubenswrapper[4705]: E0124 08:32:13.577052 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:32:25 crc kubenswrapper[4705]: I0124 08:32:25.576882 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:32:25 crc kubenswrapper[4705]: E0124 08:32:25.577528 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:32:39 crc kubenswrapper[4705]: I0124 08:32:39.576585 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:32:39 crc kubenswrapper[4705]: E0124 08:32:39.577738 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 
24 08:32:50 crc kubenswrapper[4705]: I0124 08:32:50.575951 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:32:50 crc kubenswrapper[4705]: E0124 08:32:50.576792 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:32:58 crc kubenswrapper[4705]: I0124 08:32:58.375433 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7c64596589-v9zxl_f5382856-3a6e-4d10-beb2-9df688e2f6c7/manager/0.log" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.536693 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.537340 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="b84bd122-47ef-448f-914d-c65c554fa7c1" containerName="openstackclient" containerID="cri-o://e2f51c5472d07b377d08fb56c4c54925392682654e4a391e6d6c605ece5a52f9" gracePeriod=2 Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.549224 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.588353 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 24 08:33:00 crc kubenswrapper[4705]: E0124 08:33:00.588900 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9609b625-38c5-4306-97ba-d3870b4cabb4" containerName="collect-profiles" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.588928 4705 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9609b625-38c5-4306-97ba-d3870b4cabb4" containerName="collect-profiles" Jan 24 08:33:00 crc kubenswrapper[4705]: E0124 08:33:00.588946 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca587b10-b782-4dd1-a3fa-e9dfd773a2e3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.588959 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca587b10-b782-4dd1-a3fa-e9dfd773a2e3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 24 08:33:00 crc kubenswrapper[4705]: E0124 08:33:00.588990 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b84bd122-47ef-448f-914d-c65c554fa7c1" containerName="openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.588998 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b84bd122-47ef-448f-914d-c65c554fa7c1" containerName="openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.589253 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca587b10-b782-4dd1-a3fa-e9dfd773a2e3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.589286 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b84bd122-47ef-448f-914d-c65c554fa7c1" containerName="openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.589302 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="9609b625-38c5-4306-97ba-d3870b4cabb4" containerName="collect-profiles" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.590118 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.594992 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b84bd122-47ef-448f-914d-c65c554fa7c1" podUID="b92c4257-e428-48ba-b079-9fb9d3453b90" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.621573 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.647797 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 24 08:33:00 crc kubenswrapper[4705]: E0124 08:33:00.648658 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-ljw9s openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="b92c4257-e428-48ba-b079-9fb9d3453b90" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.653607 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljw9s\" (UniqueName: \"kubernetes.io/projected/b92c4257-e428-48ba-b079-9fb9d3453b90-kube-api-access-ljw9s\") pod \"openstackclient\" (UID: \"b92c4257-e428-48ba-b079-9fb9d3453b90\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.653735 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b92c4257-e428-48ba-b079-9fb9d3453b90-openstack-config\") pod \"openstackclient\" (UID: \"b92c4257-e428-48ba-b079-9fb9d3453b90\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.653815 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b92c4257-e428-48ba-b079-9fb9d3453b90-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b92c4257-e428-48ba-b079-9fb9d3453b90\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.653965 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b92c4257-e428-48ba-b079-9fb9d3453b90-openstack-config-secret\") pod \"openstackclient\" (UID: \"b92c4257-e428-48ba-b079-9fb9d3453b90\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.659524 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.680883 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.683368 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.692647 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.758583 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed21d33-a27e-43a4-b5aa-7d3c25375467-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fed21d33-a27e-43a4-b5aa-7d3c25375467\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.758666 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92c4257-e428-48ba-b079-9fb9d3453b90-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b92c4257-e428-48ba-b079-9fb9d3453b90\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc 
kubenswrapper[4705]: I0124 08:33:00.758721 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fed21d33-a27e-43a4-b5aa-7d3c25375467-openstack-config-secret\") pod \"openstackclient\" (UID: \"fed21d33-a27e-43a4-b5aa-7d3c25375467\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.758770 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b92c4257-e428-48ba-b079-9fb9d3453b90-openstack-config-secret\") pod \"openstackclient\" (UID: \"b92c4257-e428-48ba-b079-9fb9d3453b90\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.758992 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fed21d33-a27e-43a4-b5aa-7d3c25375467-openstack-config\") pod \"openstackclient\" (UID: \"fed21d33-a27e-43a4-b5aa-7d3c25375467\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.759318 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljw9s\" (UniqueName: \"kubernetes.io/projected/b92c4257-e428-48ba-b079-9fb9d3453b90-kube-api-access-ljw9s\") pod \"openstackclient\" (UID: \"b92c4257-e428-48ba-b079-9fb9d3453b90\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.759423 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb27j\" (UniqueName: \"kubernetes.io/projected/fed21d33-a27e-43a4-b5aa-7d3c25375467-kube-api-access-mb27j\") pod \"openstackclient\" (UID: \"fed21d33-a27e-43a4-b5aa-7d3c25375467\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.759560 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b92c4257-e428-48ba-b079-9fb9d3453b90-openstack-config\") pod \"openstackclient\" (UID: \"b92c4257-e428-48ba-b079-9fb9d3453b90\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.760376 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b92c4257-e428-48ba-b079-9fb9d3453b90-openstack-config\") pod \"openstackclient\" (UID: \"b92c4257-e428-48ba-b079-9fb9d3453b90\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: E0124 08:33:00.761708 4705 projected.go:194] Error preparing data for projected volume kube-api-access-ljw9s for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (b92c4257-e428-48ba-b079-9fb9d3453b90) does not match the UID in record. The object might have been deleted and then recreated Jan 24 08:33:00 crc kubenswrapper[4705]: E0124 08:33:00.761791 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b92c4257-e428-48ba-b079-9fb9d3453b90-kube-api-access-ljw9s podName:b92c4257-e428-48ba-b079-9fb9d3453b90 nodeName:}" failed. No retries permitted until 2026-01-24 08:33:01.261758642 +0000 UTC m=+3119.981631930 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ljw9s" (UniqueName: "kubernetes.io/projected/b92c4257-e428-48ba-b079-9fb9d3453b90-kube-api-access-ljw9s") pod "openstackclient" (UID: "b92c4257-e428-48ba-b079-9fb9d3453b90") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (b92c4257-e428-48ba-b079-9fb9d3453b90) does not match the UID in record. 
The object might have been deleted and then recreated Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.765364 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b92c4257-e428-48ba-b079-9fb9d3453b90-openstack-config-secret\") pod \"openstackclient\" (UID: \"b92c4257-e428-48ba-b079-9fb9d3453b90\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.765481 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92c4257-e428-48ba-b079-9fb9d3453b90-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b92c4257-e428-48ba-b079-9fb9d3453b90\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.861444 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed21d33-a27e-43a4-b5aa-7d3c25375467-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fed21d33-a27e-43a4-b5aa-7d3c25375467\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.861856 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fed21d33-a27e-43a4-b5aa-7d3c25375467-openstack-config-secret\") pod \"openstackclient\" (UID: \"fed21d33-a27e-43a4-b5aa-7d3c25375467\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.861946 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fed21d33-a27e-43a4-b5aa-7d3c25375467-openstack-config\") pod \"openstackclient\" (UID: \"fed21d33-a27e-43a4-b5aa-7d3c25375467\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.862061 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mb27j\" (UniqueName: \"kubernetes.io/projected/fed21d33-a27e-43a4-b5aa-7d3c25375467-kube-api-access-mb27j\") pod \"openstackclient\" (UID: \"fed21d33-a27e-43a4-b5aa-7d3c25375467\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.862997 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fed21d33-a27e-43a4-b5aa-7d3c25375467-openstack-config\") pod \"openstackclient\" (UID: \"fed21d33-a27e-43a4-b5aa-7d3c25375467\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.865624 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fed21d33-a27e-43a4-b5aa-7d3c25375467-openstack-config-secret\") pod \"openstackclient\" (UID: \"fed21d33-a27e-43a4-b5aa-7d3c25375467\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.865731 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed21d33-a27e-43a4-b5aa-7d3c25375467-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fed21d33-a27e-43a4-b5aa-7d3c25375467\") " pod="openstack/openstackclient" Jan 24 08:33:00 crc kubenswrapper[4705]: I0124 08:33:00.878791 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb27j\" (UniqueName: \"kubernetes.io/projected/fed21d33-a27e-43a4-b5aa-7d3c25375467-kube-api-access-mb27j\") pod \"openstackclient\" (UID: \"fed21d33-a27e-43a4-b5aa-7d3c25375467\") " pod="openstack/openstackclient" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.007474 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.142377 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.150198 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b92c4257-e428-48ba-b079-9fb9d3453b90" podUID="fed21d33-a27e-43a4-b5aa-7d3c25375467" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.240164 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.271018 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b92c4257-e428-48ba-b079-9fb9d3453b90-openstack-config-secret\") pod \"b92c4257-e428-48ba-b079-9fb9d3453b90\" (UID: \"b92c4257-e428-48ba-b079-9fb9d3453b90\") " Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.271075 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92c4257-e428-48ba-b079-9fb9d3453b90-combined-ca-bundle\") pod \"b92c4257-e428-48ba-b079-9fb9d3453b90\" (UID: \"b92c4257-e428-48ba-b079-9fb9d3453b90\") " Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.271346 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b92c4257-e428-48ba-b079-9fb9d3453b90-openstack-config\") pod \"b92c4257-e428-48ba-b079-9fb9d3453b90\" (UID: \"b92c4257-e428-48ba-b079-9fb9d3453b90\") " Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.271805 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljw9s\" (UniqueName: 
\"kubernetes.io/projected/b92c4257-e428-48ba-b079-9fb9d3453b90-kube-api-access-ljw9s\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.272447 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b92c4257-e428-48ba-b079-9fb9d3453b90-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b92c4257-e428-48ba-b079-9fb9d3453b90" (UID: "b92c4257-e428-48ba-b079-9fb9d3453b90"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.277164 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92c4257-e428-48ba-b079-9fb9d3453b90-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b92c4257-e428-48ba-b079-9fb9d3453b90" (UID: "b92c4257-e428-48ba-b079-9fb9d3453b90"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.277943 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92c4257-e428-48ba-b079-9fb9d3453b90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b92c4257-e428-48ba-b079-9fb9d3453b90" (UID: "b92c4257-e428-48ba-b079-9fb9d3453b90"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.376957 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b92c4257-e428-48ba-b079-9fb9d3453b90-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.376999 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92c4257-e428-48ba-b079-9fb9d3453b90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.377011 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b92c4257-e428-48ba-b079-9fb9d3453b90-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.584748 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:33:01 crc kubenswrapper[4705]: E0124 08:33:01.585100 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.592671 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b92c4257-e428-48ba-b079-9fb9d3453b90" path="/var/lib/kubelet/pods/b92c4257-e428-48ba-b079-9fb9d3453b90/volumes" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.601401 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 24 08:33:01 crc kubenswrapper[4705]: 
I0124 08:33:01.826699 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-x52km"] Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.830690 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-x52km" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.860576 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-a922-account-create-update-pnhgf"] Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.861762 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-x52km"] Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.861858 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-a922-account-create-update-pnhgf" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.885865 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.889973 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fwps\" (UniqueName: \"kubernetes.io/projected/0c82ed6b-9321-49fa-a79c-76c390cb0d50-kube-api-access-7fwps\") pod \"aodh-a922-account-create-update-pnhgf\" (UID: \"0c82ed6b-9321-49fa-a79c-76c390cb0d50\") " pod="openstack/aodh-a922-account-create-update-pnhgf" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.890026 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8wl5\" (UniqueName: \"kubernetes.io/projected/8f498e1a-b565-40e1-96ae-6af81995e5d9-kube-api-access-s8wl5\") pod \"aodh-db-create-x52km\" (UID: \"8f498e1a-b565-40e1-96ae-6af81995e5d9\") " pod="openstack/aodh-db-create-x52km" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.890094 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/0c82ed6b-9321-49fa-a79c-76c390cb0d50-operator-scripts\") pod \"aodh-a922-account-create-update-pnhgf\" (UID: \"0c82ed6b-9321-49fa-a79c-76c390cb0d50\") " pod="openstack/aodh-a922-account-create-update-pnhgf" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.890202 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f498e1a-b565-40e1-96ae-6af81995e5d9-operator-scripts\") pod \"aodh-db-create-x52km\" (UID: \"8f498e1a-b565-40e1-96ae-6af81995e5d9\") " pod="openstack/aodh-db-create-x52km" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.895862 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-a922-account-create-update-pnhgf"] Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.991549 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f498e1a-b565-40e1-96ae-6af81995e5d9-operator-scripts\") pod \"aodh-db-create-x52km\" (UID: \"8f498e1a-b565-40e1-96ae-6af81995e5d9\") " pod="openstack/aodh-db-create-x52km" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.991657 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fwps\" (UniqueName: \"kubernetes.io/projected/0c82ed6b-9321-49fa-a79c-76c390cb0d50-kube-api-access-7fwps\") pod \"aodh-a922-account-create-update-pnhgf\" (UID: \"0c82ed6b-9321-49fa-a79c-76c390cb0d50\") " pod="openstack/aodh-a922-account-create-update-pnhgf" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.991684 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8wl5\" (UniqueName: \"kubernetes.io/projected/8f498e1a-b565-40e1-96ae-6af81995e5d9-kube-api-access-s8wl5\") pod \"aodh-db-create-x52km\" (UID: \"8f498e1a-b565-40e1-96ae-6af81995e5d9\") " pod="openstack/aodh-db-create-x52km" 
Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.991712 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c82ed6b-9321-49fa-a79c-76c390cb0d50-operator-scripts\") pod \"aodh-a922-account-create-update-pnhgf\" (UID: \"0c82ed6b-9321-49fa-a79c-76c390cb0d50\") " pod="openstack/aodh-a922-account-create-update-pnhgf" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.992572 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c82ed6b-9321-49fa-a79c-76c390cb0d50-operator-scripts\") pod \"aodh-a922-account-create-update-pnhgf\" (UID: \"0c82ed6b-9321-49fa-a79c-76c390cb0d50\") " pod="openstack/aodh-a922-account-create-update-pnhgf" Jan 24 08:33:01 crc kubenswrapper[4705]: I0124 08:33:01.992592 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f498e1a-b565-40e1-96ae-6af81995e5d9-operator-scripts\") pod \"aodh-db-create-x52km\" (UID: \"8f498e1a-b565-40e1-96ae-6af81995e5d9\") " pod="openstack/aodh-db-create-x52km" Jan 24 08:33:02 crc kubenswrapper[4705]: I0124 08:33:02.008370 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8wl5\" (UniqueName: \"kubernetes.io/projected/8f498e1a-b565-40e1-96ae-6af81995e5d9-kube-api-access-s8wl5\") pod \"aodh-db-create-x52km\" (UID: \"8f498e1a-b565-40e1-96ae-6af81995e5d9\") " pod="openstack/aodh-db-create-x52km" Jan 24 08:33:02 crc kubenswrapper[4705]: I0124 08:33:02.010023 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fwps\" (UniqueName: \"kubernetes.io/projected/0c82ed6b-9321-49fa-a79c-76c390cb0d50-kube-api-access-7fwps\") pod \"aodh-a922-account-create-update-pnhgf\" (UID: \"0c82ed6b-9321-49fa-a79c-76c390cb0d50\") " pod="openstack/aodh-a922-account-create-update-pnhgf" Jan 24 08:33:02 crc 
kubenswrapper[4705]: I0124 08:33:02.155763 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"fed21d33-a27e-43a4-b5aa-7d3c25375467","Type":"ContainerStarted","Data":"19b1e4f3352b3c97e8b170f878cdffd963cef92743f71a3b49da9139d2ac4da0"} Jan 24 08:33:02 crc kubenswrapper[4705]: I0124 08:33:02.155846 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"fed21d33-a27e-43a4-b5aa-7d3c25375467","Type":"ContainerStarted","Data":"03eef65d201fb1fe835f9d020943af7934ec7b4cf449aae8910425e078854e59"} Jan 24 08:33:02 crc kubenswrapper[4705]: I0124 08:33:02.155840 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 24 08:33:02 crc kubenswrapper[4705]: I0124 08:33:02.306679 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-a922-account-create-update-pnhgf" Jan 24 08:33:02 crc kubenswrapper[4705]: I0124 08:33:02.307858 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-x52km" Jan 24 08:33:02 crc kubenswrapper[4705]: I0124 08:33:02.326390 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.326369305 podStartE2EDuration="2.326369305s" podCreationTimestamp="2026-01-24 08:33:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:33:02.323426159 +0000 UTC m=+3121.043299447" watchObservedRunningTime="2026-01-24 08:33:02.326369305 +0000 UTC m=+3121.046242593" Jan 24 08:33:02 crc kubenswrapper[4705]: I0124 08:33:02.326788 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b92c4257-e428-48ba-b079-9fb9d3453b90" podUID="fed21d33-a27e-43a4-b5aa-7d3c25375467" Jan 24 08:33:02 crc kubenswrapper[4705]: I0124 08:33:02.916188 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-x52km"] Jan 24 08:33:02 crc kubenswrapper[4705]: W0124 08:33:02.936079 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c82ed6b_9321_49fa_a79c_76c390cb0d50.slice/crio-d9606a322402a49a25441915fc9803f31b1956dfef5e09bf6c0cb4c958c76f8b WatchSource:0}: Error finding container d9606a322402a49a25441915fc9803f31b1956dfef5e09bf6c0cb4c958c76f8b: Status 404 returned error can't find the container with id d9606a322402a49a25441915fc9803f31b1956dfef5e09bf6c0cb4c958c76f8b Jan 24 08:33:02 crc kubenswrapper[4705]: I0124 08:33:02.946234 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-a922-account-create-update-pnhgf"] Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.091685 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.095193 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b84bd122-47ef-448f-914d-c65c554fa7c1" podUID="fed21d33-a27e-43a4-b5aa-7d3c25375467" Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.167457 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a922-account-create-update-pnhgf" event={"ID":"0c82ed6b-9321-49fa-a79c-76c390cb0d50","Type":"ContainerStarted","Data":"d9606a322402a49a25441915fc9803f31b1956dfef5e09bf6c0cb4c958c76f8b"} Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.169772 4705 generic.go:334] "Generic (PLEG): container finished" podID="b84bd122-47ef-448f-914d-c65c554fa7c1" containerID="e2f51c5472d07b377d08fb56c4c54925392682654e4a391e6d6c605ece5a52f9" exitCode=137 Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.169857 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.169884 4705 scope.go:117] "RemoveContainer" containerID="e2f51c5472d07b377d08fb56c4c54925392682654e4a391e6d6c605ece5a52f9" Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.170879 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-x52km" event={"ID":"8f498e1a-b565-40e1-96ae-6af81995e5d9","Type":"ContainerStarted","Data":"cf25f2dcf7c63e94050e754227513af371795ac9b31f7981fb8b0167de317ae2"} Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.173942 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b84bd122-47ef-448f-914d-c65c554fa7c1" podUID="fed21d33-a27e-43a4-b5aa-7d3c25375467" Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.193201 4705 scope.go:117] "RemoveContainer" containerID="e2f51c5472d07b377d08fb56c4c54925392682654e4a391e6d6c605ece5a52f9" Jan 24 08:33:03 crc kubenswrapper[4705]: E0124 08:33:03.193664 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2f51c5472d07b377d08fb56c4c54925392682654e4a391e6d6c605ece5a52f9\": container with ID starting with e2f51c5472d07b377d08fb56c4c54925392682654e4a391e6d6c605ece5a52f9 not found: ID does not exist" containerID="e2f51c5472d07b377d08fb56c4c54925392682654e4a391e6d6c605ece5a52f9" Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.193710 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2f51c5472d07b377d08fb56c4c54925392682654e4a391e6d6c605ece5a52f9"} err="failed to get container status \"e2f51c5472d07b377d08fb56c4c54925392682654e4a391e6d6c605ece5a52f9\": rpc error: code = NotFound desc = could not find container \"e2f51c5472d07b377d08fb56c4c54925392682654e4a391e6d6c605ece5a52f9\": container with ID starting with 
e2f51c5472d07b377d08fb56c4c54925392682654e4a391e6d6c605ece5a52f9 not found: ID does not exist" Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.260901 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b84bd122-47ef-448f-914d-c65c554fa7c1-combined-ca-bundle\") pod \"b84bd122-47ef-448f-914d-c65c554fa7c1\" (UID: \"b84bd122-47ef-448f-914d-c65c554fa7c1\") " Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.261068 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d722r\" (UniqueName: \"kubernetes.io/projected/b84bd122-47ef-448f-914d-c65c554fa7c1-kube-api-access-d722r\") pod \"b84bd122-47ef-448f-914d-c65c554fa7c1\" (UID: \"b84bd122-47ef-448f-914d-c65c554fa7c1\") " Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.261212 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b84bd122-47ef-448f-914d-c65c554fa7c1-openstack-config\") pod \"b84bd122-47ef-448f-914d-c65c554fa7c1\" (UID: \"b84bd122-47ef-448f-914d-c65c554fa7c1\") " Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.261303 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b84bd122-47ef-448f-914d-c65c554fa7c1-openstack-config-secret\") pod \"b84bd122-47ef-448f-914d-c65c554fa7c1\" (UID: \"b84bd122-47ef-448f-914d-c65c554fa7c1\") " Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.268951 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b84bd122-47ef-448f-914d-c65c554fa7c1-kube-api-access-d722r" (OuterVolumeSpecName: "kube-api-access-d722r") pod "b84bd122-47ef-448f-914d-c65c554fa7c1" (UID: "b84bd122-47ef-448f-914d-c65c554fa7c1"). InnerVolumeSpecName "kube-api-access-d722r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.300410 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b84bd122-47ef-448f-914d-c65c554fa7c1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b84bd122-47ef-448f-914d-c65c554fa7c1" (UID: "b84bd122-47ef-448f-914d-c65c554fa7c1"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.301527 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b84bd122-47ef-448f-914d-c65c554fa7c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b84bd122-47ef-448f-914d-c65c554fa7c1" (UID: "b84bd122-47ef-448f-914d-c65c554fa7c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.336650 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b84bd122-47ef-448f-914d-c65c554fa7c1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b84bd122-47ef-448f-914d-c65c554fa7c1" (UID: "b84bd122-47ef-448f-914d-c65c554fa7c1"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.364790 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d722r\" (UniqueName: \"kubernetes.io/projected/b84bd122-47ef-448f-914d-c65c554fa7c1-kube-api-access-d722r\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.364956 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b84bd122-47ef-448f-914d-c65c554fa7c1-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.364977 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b84bd122-47ef-448f-914d-c65c554fa7c1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.364989 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b84bd122-47ef-448f-914d-c65c554fa7c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.504887 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b84bd122-47ef-448f-914d-c65c554fa7c1" podUID="fed21d33-a27e-43a4-b5aa-7d3c25375467" Jan 24 08:33:03 crc kubenswrapper[4705]: I0124 08:33:03.588239 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b84bd122-47ef-448f-914d-c65c554fa7c1" path="/var/lib/kubelet/pods/b84bd122-47ef-448f-914d-c65c554fa7c1/volumes" Jan 24 08:33:04 crc kubenswrapper[4705]: I0124 08:33:04.179638 4705 generic.go:334] "Generic (PLEG): container finished" podID="0c82ed6b-9321-49fa-a79c-76c390cb0d50" containerID="a1c006f9b183a3d169931b64f59da04d839165f1f12c0eacf5cba61ecd020c61" exitCode=0 Jan 24 08:33:04 crc kubenswrapper[4705]: I0124 
08:33:04.179746 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a922-account-create-update-pnhgf" event={"ID":"0c82ed6b-9321-49fa-a79c-76c390cb0d50","Type":"ContainerDied","Data":"a1c006f9b183a3d169931b64f59da04d839165f1f12c0eacf5cba61ecd020c61"} Jan 24 08:33:04 crc kubenswrapper[4705]: I0124 08:33:04.183668 4705 generic.go:334] "Generic (PLEG): container finished" podID="8f498e1a-b565-40e1-96ae-6af81995e5d9" containerID="09d4bd8406e40165cdf7ac442424b56ae5df0272859d547eb92aa331df81d488" exitCode=0 Jan 24 08:33:04 crc kubenswrapper[4705]: I0124 08:33:04.183724 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-x52km" event={"ID":"8f498e1a-b565-40e1-96ae-6af81995e5d9","Type":"ContainerDied","Data":"09d4bd8406e40165cdf7ac442424b56ae5df0272859d547eb92aa331df81d488"} Jan 24 08:33:05 crc kubenswrapper[4705]: I0124 08:33:05.556858 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-a922-account-create-update-pnhgf" Jan 24 08:33:05 crc kubenswrapper[4705]: I0124 08:33:05.562846 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-x52km" Jan 24 08:33:05 crc kubenswrapper[4705]: I0124 08:33:05.637390 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c82ed6b-9321-49fa-a79c-76c390cb0d50-operator-scripts\") pod \"0c82ed6b-9321-49fa-a79c-76c390cb0d50\" (UID: \"0c82ed6b-9321-49fa-a79c-76c390cb0d50\") " Jan 24 08:33:05 crc kubenswrapper[4705]: I0124 08:33:05.637567 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8wl5\" (UniqueName: \"kubernetes.io/projected/8f498e1a-b565-40e1-96ae-6af81995e5d9-kube-api-access-s8wl5\") pod \"8f498e1a-b565-40e1-96ae-6af81995e5d9\" (UID: \"8f498e1a-b565-40e1-96ae-6af81995e5d9\") " Jan 24 08:33:05 crc kubenswrapper[4705]: I0124 08:33:05.637622 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fwps\" (UniqueName: \"kubernetes.io/projected/0c82ed6b-9321-49fa-a79c-76c390cb0d50-kube-api-access-7fwps\") pod \"0c82ed6b-9321-49fa-a79c-76c390cb0d50\" (UID: \"0c82ed6b-9321-49fa-a79c-76c390cb0d50\") " Jan 24 08:33:05 crc kubenswrapper[4705]: I0124 08:33:05.637739 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f498e1a-b565-40e1-96ae-6af81995e5d9-operator-scripts\") pod \"8f498e1a-b565-40e1-96ae-6af81995e5d9\" (UID: \"8f498e1a-b565-40e1-96ae-6af81995e5d9\") " Jan 24 08:33:05 crc kubenswrapper[4705]: I0124 08:33:05.638245 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f498e1a-b565-40e1-96ae-6af81995e5d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f498e1a-b565-40e1-96ae-6af81995e5d9" (UID: "8f498e1a-b565-40e1-96ae-6af81995e5d9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:33:05 crc kubenswrapper[4705]: I0124 08:33:05.638565 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c82ed6b-9321-49fa-a79c-76c390cb0d50-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c82ed6b-9321-49fa-a79c-76c390cb0d50" (UID: "0c82ed6b-9321-49fa-a79c-76c390cb0d50"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:33:05 crc kubenswrapper[4705]: I0124 08:33:05.643181 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f498e1a-b565-40e1-96ae-6af81995e5d9-kube-api-access-s8wl5" (OuterVolumeSpecName: "kube-api-access-s8wl5") pod "8f498e1a-b565-40e1-96ae-6af81995e5d9" (UID: "8f498e1a-b565-40e1-96ae-6af81995e5d9"). InnerVolumeSpecName "kube-api-access-s8wl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:33:05 crc kubenswrapper[4705]: I0124 08:33:05.644973 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c82ed6b-9321-49fa-a79c-76c390cb0d50-kube-api-access-7fwps" (OuterVolumeSpecName: "kube-api-access-7fwps") pod "0c82ed6b-9321-49fa-a79c-76c390cb0d50" (UID: "0c82ed6b-9321-49fa-a79c-76c390cb0d50"). InnerVolumeSpecName "kube-api-access-7fwps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:33:05 crc kubenswrapper[4705]: I0124 08:33:05.740092 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c82ed6b-9321-49fa-a79c-76c390cb0d50-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:05 crc kubenswrapper[4705]: I0124 08:33:05.740129 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8wl5\" (UniqueName: \"kubernetes.io/projected/8f498e1a-b565-40e1-96ae-6af81995e5d9-kube-api-access-s8wl5\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:05 crc kubenswrapper[4705]: I0124 08:33:05.740141 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fwps\" (UniqueName: \"kubernetes.io/projected/0c82ed6b-9321-49fa-a79c-76c390cb0d50-kube-api-access-7fwps\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:05 crc kubenswrapper[4705]: I0124 08:33:05.740150 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f498e1a-b565-40e1-96ae-6af81995e5d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:06 crc kubenswrapper[4705]: I0124 08:33:06.206115 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-x52km" event={"ID":"8f498e1a-b565-40e1-96ae-6af81995e5d9","Type":"ContainerDied","Data":"cf25f2dcf7c63e94050e754227513af371795ac9b31f7981fb8b0167de317ae2"} Jan 24 08:33:06 crc kubenswrapper[4705]: I0124 08:33:06.206166 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf25f2dcf7c63e94050e754227513af371795ac9b31f7981fb8b0167de317ae2" Jan 24 08:33:06 crc kubenswrapper[4705]: I0124 08:33:06.206179 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-x52km" Jan 24 08:33:06 crc kubenswrapper[4705]: I0124 08:33:06.207930 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a922-account-create-update-pnhgf" event={"ID":"0c82ed6b-9321-49fa-a79c-76c390cb0d50","Type":"ContainerDied","Data":"d9606a322402a49a25441915fc9803f31b1956dfef5e09bf6c0cb4c958c76f8b"} Jan 24 08:33:06 crc kubenswrapper[4705]: I0124 08:33:06.207959 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9606a322402a49a25441915fc9803f31b1956dfef5e09bf6c0cb4c958c76f8b" Jan 24 08:33:06 crc kubenswrapper[4705]: I0124 08:33:06.208027 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-a922-account-create-update-pnhgf" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.354686 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-h2vff"] Jan 24 08:33:07 crc kubenswrapper[4705]: E0124 08:33:07.355400 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f498e1a-b565-40e1-96ae-6af81995e5d9" containerName="mariadb-database-create" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.355414 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f498e1a-b565-40e1-96ae-6af81995e5d9" containerName="mariadb-database-create" Jan 24 08:33:07 crc kubenswrapper[4705]: E0124 08:33:07.355437 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c82ed6b-9321-49fa-a79c-76c390cb0d50" containerName="mariadb-account-create-update" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.355443 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c82ed6b-9321-49fa-a79c-76c390cb0d50" containerName="mariadb-account-create-update" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.355804 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f498e1a-b565-40e1-96ae-6af81995e5d9" containerName="mariadb-database-create" 
Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.355879 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c82ed6b-9321-49fa-a79c-76c390cb0d50" containerName="mariadb-account-create-update" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.356522 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-h2vff" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.359586 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.361935 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-q9ztl" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.362646 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.368616 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.392696 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-scripts\") pod \"aodh-db-sync-h2vff\" (UID: \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\") " pod="openstack/aodh-db-sync-h2vff" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.392772 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-config-data\") pod \"aodh-db-sync-h2vff\" (UID: \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\") " pod="openstack/aodh-db-sync-h2vff" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.392988 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-combined-ca-bundle\") pod \"aodh-db-sync-h2vff\" (UID: \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\") " pod="openstack/aodh-db-sync-h2vff" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.393156 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4gjx\" (UniqueName: \"kubernetes.io/projected/09d5b7c8-025c-46dc-8be2-7fec273bcde4-kube-api-access-s4gjx\") pod \"aodh-db-sync-h2vff\" (UID: \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\") " pod="openstack/aodh-db-sync-h2vff" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.393669 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-h2vff"] Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.494864 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4gjx\" (UniqueName: \"kubernetes.io/projected/09d5b7c8-025c-46dc-8be2-7fec273bcde4-kube-api-access-s4gjx\") pod \"aodh-db-sync-h2vff\" (UID: \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\") " pod="openstack/aodh-db-sync-h2vff" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.494980 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-scripts\") pod \"aodh-db-sync-h2vff\" (UID: \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\") " pod="openstack/aodh-db-sync-h2vff" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.495026 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-config-data\") pod \"aodh-db-sync-h2vff\" (UID: \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\") " pod="openstack/aodh-db-sync-h2vff" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.495107 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-combined-ca-bundle\") pod \"aodh-db-sync-h2vff\" (UID: \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\") " pod="openstack/aodh-db-sync-h2vff" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.500383 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-combined-ca-bundle\") pod \"aodh-db-sync-h2vff\" (UID: \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\") " pod="openstack/aodh-db-sync-h2vff" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.500493 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-scripts\") pod \"aodh-db-sync-h2vff\" (UID: \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\") " pod="openstack/aodh-db-sync-h2vff" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.506430 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-config-data\") pod \"aodh-db-sync-h2vff\" (UID: \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\") " pod="openstack/aodh-db-sync-h2vff" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.514741 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4gjx\" (UniqueName: \"kubernetes.io/projected/09d5b7c8-025c-46dc-8be2-7fec273bcde4-kube-api-access-s4gjx\") pod \"aodh-db-sync-h2vff\" (UID: \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\") " pod="openstack/aodh-db-sync-h2vff" Jan 24 08:33:07 crc kubenswrapper[4705]: I0124 08:33:07.714088 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-h2vff" Jan 24 08:33:08 crc kubenswrapper[4705]: I0124 08:33:08.051218 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-h2vff"] Jan 24 08:33:08 crc kubenswrapper[4705]: I0124 08:33:08.258999 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-h2vff" event={"ID":"09d5b7c8-025c-46dc-8be2-7fec273bcde4","Type":"ContainerStarted","Data":"1e1926f6f80789ec43ab19f3d33b49b9d692a181d70b487c4866c1c647b85033"} Jan 24 08:33:10 crc kubenswrapper[4705]: E0124 08:33:10.363652 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92c4257_e428_48ba_b079_9fb9d3453b90.slice\": RecentStats: unable to find data in memory cache]" Jan 24 08:33:12 crc kubenswrapper[4705]: I0124 08:33:12.348378 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-h2vff" event={"ID":"09d5b7c8-025c-46dc-8be2-7fec273bcde4","Type":"ContainerStarted","Data":"41d88174b5d16303151aa6a7c501a900153e8ee1826bcea5961edeb1fdbff9f3"} Jan 24 08:33:12 crc kubenswrapper[4705]: I0124 08:33:12.367245 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-h2vff" podStartSLOduration=1.6429226049999999 podStartE2EDuration="5.367228293s" podCreationTimestamp="2026-01-24 08:33:07 +0000 UTC" firstStartedPulling="2026-01-24 08:33:08.060341942 +0000 UTC m=+3126.780215230" lastFinishedPulling="2026-01-24 08:33:11.78464763 +0000 UTC m=+3130.504520918" observedRunningTime="2026-01-24 08:33:12.366235894 +0000 UTC m=+3131.086109212" watchObservedRunningTime="2026-01-24 08:33:12.367228293 +0000 UTC m=+3131.087101581" Jan 24 08:33:13 crc kubenswrapper[4705]: I0124 08:33:13.576766 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:33:13 crc 
kubenswrapper[4705]: E0124 08:33:13.577185 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:33:15 crc kubenswrapper[4705]: I0124 08:33:15.388623 4705 generic.go:334] "Generic (PLEG): container finished" podID="09d5b7c8-025c-46dc-8be2-7fec273bcde4" containerID="41d88174b5d16303151aa6a7c501a900153e8ee1826bcea5961edeb1fdbff9f3" exitCode=0 Jan 24 08:33:15 crc kubenswrapper[4705]: I0124 08:33:15.388903 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-h2vff" event={"ID":"09d5b7c8-025c-46dc-8be2-7fec273bcde4","Type":"ContainerDied","Data":"41d88174b5d16303151aa6a7c501a900153e8ee1826bcea5961edeb1fdbff9f3"} Jan 24 08:33:16 crc kubenswrapper[4705]: I0124 08:33:16.810852 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-h2vff" Jan 24 08:33:16 crc kubenswrapper[4705]: I0124 08:33:16.934788 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4gjx\" (UniqueName: \"kubernetes.io/projected/09d5b7c8-025c-46dc-8be2-7fec273bcde4-kube-api-access-s4gjx\") pod \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\" (UID: \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\") " Jan 24 08:33:16 crc kubenswrapper[4705]: I0124 08:33:16.935254 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-scripts\") pod \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\" (UID: \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\") " Jan 24 08:33:16 crc kubenswrapper[4705]: I0124 08:33:16.935280 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-combined-ca-bundle\") pod \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\" (UID: \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\") " Jan 24 08:33:16 crc kubenswrapper[4705]: I0124 08:33:16.935913 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-config-data\") pod \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\" (UID: \"09d5b7c8-025c-46dc-8be2-7fec273bcde4\") " Jan 24 08:33:16 crc kubenswrapper[4705]: I0124 08:33:16.940496 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-scripts" (OuterVolumeSpecName: "scripts") pod "09d5b7c8-025c-46dc-8be2-7fec273bcde4" (UID: "09d5b7c8-025c-46dc-8be2-7fec273bcde4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:33:16 crc kubenswrapper[4705]: I0124 08:33:16.941069 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d5b7c8-025c-46dc-8be2-7fec273bcde4-kube-api-access-s4gjx" (OuterVolumeSpecName: "kube-api-access-s4gjx") pod "09d5b7c8-025c-46dc-8be2-7fec273bcde4" (UID: "09d5b7c8-025c-46dc-8be2-7fec273bcde4"). InnerVolumeSpecName "kube-api-access-s4gjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:33:16 crc kubenswrapper[4705]: I0124 08:33:16.964493 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09d5b7c8-025c-46dc-8be2-7fec273bcde4" (UID: "09d5b7c8-025c-46dc-8be2-7fec273bcde4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:33:16 crc kubenswrapper[4705]: I0124 08:33:16.965467 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-config-data" (OuterVolumeSpecName: "config-data") pod "09d5b7c8-025c-46dc-8be2-7fec273bcde4" (UID: "09d5b7c8-025c-46dc-8be2-7fec273bcde4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:33:17 crc kubenswrapper[4705]: I0124 08:33:17.039506 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4gjx\" (UniqueName: \"kubernetes.io/projected/09d5b7c8-025c-46dc-8be2-7fec273bcde4-kube-api-access-s4gjx\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:17 crc kubenswrapper[4705]: I0124 08:33:17.039544 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:17 crc kubenswrapper[4705]: I0124 08:33:17.039559 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:17 crc kubenswrapper[4705]: I0124 08:33:17.039570 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d5b7c8-025c-46dc-8be2-7fec273bcde4-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:17 crc kubenswrapper[4705]: I0124 08:33:17.406294 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-h2vff" event={"ID":"09d5b7c8-025c-46dc-8be2-7fec273bcde4","Type":"ContainerDied","Data":"1e1926f6f80789ec43ab19f3d33b49b9d692a181d70b487c4866c1c647b85033"} Jan 24 08:33:17 crc kubenswrapper[4705]: I0124 08:33:17.406330 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e1926f6f80789ec43ab19f3d33b49b9d692a181d70b487c4866c1c647b85033" Jan 24 08:33:17 crc kubenswrapper[4705]: I0124 08:33:17.406713 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-h2vff" Jan 24 08:33:20 crc kubenswrapper[4705]: E0124 08:33:20.689614 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92c4257_e428_48ba_b079_9fb9d3453b90.slice\": RecentStats: unable to find data in memory cache]" Jan 24 08:33:21 crc kubenswrapper[4705]: I0124 08:33:21.855244 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 24 08:33:21 crc kubenswrapper[4705]: E0124 08:33:21.856101 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d5b7c8-025c-46dc-8be2-7fec273bcde4" containerName="aodh-db-sync" Jan 24 08:33:21 crc kubenswrapper[4705]: I0124 08:33:21.856120 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d5b7c8-025c-46dc-8be2-7fec273bcde4" containerName="aodh-db-sync" Jan 24 08:33:21 crc kubenswrapper[4705]: I0124 08:33:21.856402 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d5b7c8-025c-46dc-8be2-7fec273bcde4" containerName="aodh-db-sync" Jan 24 08:33:21 crc kubenswrapper[4705]: I0124 08:33:21.858603 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 24 08:33:21 crc kubenswrapper[4705]: I0124 08:33:21.861966 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 24 08:33:21 crc kubenswrapper[4705]: I0124 08:33:21.862047 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-q9ztl" Jan 24 08:33:21 crc kubenswrapper[4705]: I0124 08:33:21.862263 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 24 08:33:21 crc kubenswrapper[4705]: I0124 08:33:21.866272 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 24 08:33:21 crc kubenswrapper[4705]: I0124 08:33:21.971103 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9zcc\" (UniqueName: \"kubernetes.io/projected/38e292be-6948-45cc-b208-ea9bba97622a-kube-api-access-z9zcc\") pod \"aodh-0\" (UID: \"38e292be-6948-45cc-b208-ea9bba97622a\") " pod="openstack/aodh-0" Jan 24 08:33:21 crc kubenswrapper[4705]: I0124 08:33:21.971291 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-config-data\") pod \"aodh-0\" (UID: \"38e292be-6948-45cc-b208-ea9bba97622a\") " pod="openstack/aodh-0" Jan 24 08:33:21 crc kubenswrapper[4705]: I0124 08:33:21.971405 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-scripts\") pod \"aodh-0\" (UID: \"38e292be-6948-45cc-b208-ea9bba97622a\") " pod="openstack/aodh-0" Jan 24 08:33:21 crc kubenswrapper[4705]: I0124 08:33:21.971631 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"38e292be-6948-45cc-b208-ea9bba97622a\") " pod="openstack/aodh-0" Jan 24 08:33:22 crc kubenswrapper[4705]: I0124 08:33:22.072752 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"38e292be-6948-45cc-b208-ea9bba97622a\") " pod="openstack/aodh-0" Jan 24 08:33:22 crc kubenswrapper[4705]: I0124 08:33:22.072879 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9zcc\" (UniqueName: \"kubernetes.io/projected/38e292be-6948-45cc-b208-ea9bba97622a-kube-api-access-z9zcc\") pod \"aodh-0\" (UID: \"38e292be-6948-45cc-b208-ea9bba97622a\") " pod="openstack/aodh-0" Jan 24 08:33:22 crc kubenswrapper[4705]: I0124 08:33:22.072911 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-config-data\") pod \"aodh-0\" (UID: \"38e292be-6948-45cc-b208-ea9bba97622a\") " pod="openstack/aodh-0" Jan 24 08:33:22 crc kubenswrapper[4705]: I0124 08:33:22.072944 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-scripts\") pod \"aodh-0\" (UID: \"38e292be-6948-45cc-b208-ea9bba97622a\") " pod="openstack/aodh-0" Jan 24 08:33:22 crc kubenswrapper[4705]: I0124 08:33:22.079332 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-scripts\") pod \"aodh-0\" (UID: \"38e292be-6948-45cc-b208-ea9bba97622a\") " pod="openstack/aodh-0" Jan 24 08:33:22 crc kubenswrapper[4705]: I0124 08:33:22.080211 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-config-data\") pod \"aodh-0\" (UID: \"38e292be-6948-45cc-b208-ea9bba97622a\") " pod="openstack/aodh-0" Jan 24 08:33:22 crc kubenswrapper[4705]: I0124 08:33:22.089599 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"38e292be-6948-45cc-b208-ea9bba97622a\") " pod="openstack/aodh-0" Jan 24 08:33:22 crc kubenswrapper[4705]: I0124 08:33:22.090298 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9zcc\" (UniqueName: \"kubernetes.io/projected/38e292be-6948-45cc-b208-ea9bba97622a-kube-api-access-z9zcc\") pod \"aodh-0\" (UID: \"38e292be-6948-45cc-b208-ea9bba97622a\") " pod="openstack/aodh-0" Jan 24 08:33:22 crc kubenswrapper[4705]: I0124 08:33:22.267036 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 24 08:33:22 crc kubenswrapper[4705]: I0124 08:33:22.818725 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 24 08:33:23 crc kubenswrapper[4705]: W0124 08:33:23.040604 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38e292be_6948_45cc_b208_ea9bba97622a.slice/crio-aa2fea30b46fcbd7f13425cca6a8162eeecd23f0a6b174691b78bf6e55dc0553 WatchSource:0}: Error finding container aa2fea30b46fcbd7f13425cca6a8162eeecd23f0a6b174691b78bf6e55dc0553: Status 404 returned error can't find the container with id aa2fea30b46fcbd7f13425cca6a8162eeecd23f0a6b174691b78bf6e55dc0553 Jan 24 08:33:23 crc kubenswrapper[4705]: I0124 08:33:23.466227 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"38e292be-6948-45cc-b208-ea9bba97622a","Type":"ContainerStarted","Data":"aa2fea30b46fcbd7f13425cca6a8162eeecd23f0a6b174691b78bf6e55dc0553"} Jan 24 08:33:24 crc kubenswrapper[4705]: I0124 08:33:24.539236 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38e292be-6948-45cc-b208-ea9bba97622a","Type":"ContainerStarted","Data":"c53ae0b041a027b8a602acb1d9173aadbc995048f7c4c464a84dd29fdcf478c4"} Jan 24 08:33:25 crc kubenswrapper[4705]: I0124 08:33:25.187133 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:33:25 crc kubenswrapper[4705]: I0124 08:33:25.187960 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerName="proxy-httpd" containerID="cri-o://370c21714590a8fc27d58b1d7ae90058a81f95823e6ae8f18182f185bf1c0b1c" gracePeriod=30 Jan 24 08:33:25 crc kubenswrapper[4705]: I0124 08:33:25.188104 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerName="ceilometer-central-agent" containerID="cri-o://828f0409a2667a6f31b69c64a012453123202846e0feedd2a9d3751f2a5d0df1" gracePeriod=30 Jan 24 08:33:25 crc kubenswrapper[4705]: I0124 08:33:25.187961 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerName="ceilometer-notification-agent" containerID="cri-o://a1dfeca9415ce440fa6bf6a5fcefee47a1fb74d7b847e8656d7f54684348201e" gracePeriod=30 Jan 24 08:33:25 crc kubenswrapper[4705]: I0124 08:33:25.187960 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerName="sg-core" containerID="cri-o://c7991ce896e8845d6895bcceeb6dc54cb4bd749d63d0c2fd88c63a3096c657f4" gracePeriod=30 Jan 24 
08:33:25 crc kubenswrapper[4705]: I0124 08:33:25.567862 4705 generic.go:334] "Generic (PLEG): container finished" podID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerID="370c21714590a8fc27d58b1d7ae90058a81f95823e6ae8f18182f185bf1c0b1c" exitCode=0 Jan 24 08:33:25 crc kubenswrapper[4705]: I0124 08:33:25.567902 4705 generic.go:334] "Generic (PLEG): container finished" podID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerID="c7991ce896e8845d6895bcceeb6dc54cb4bd749d63d0c2fd88c63a3096c657f4" exitCode=2 Jan 24 08:33:25 crc kubenswrapper[4705]: I0124 08:33:25.567925 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c321cd32-3e43-4eb3-aa7a-b5e67978d976","Type":"ContainerDied","Data":"370c21714590a8fc27d58b1d7ae90058a81f95823e6ae8f18182f185bf1c0b1c"} Jan 24 08:33:25 crc kubenswrapper[4705]: I0124 08:33:25.567954 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c321cd32-3e43-4eb3-aa7a-b5e67978d976","Type":"ContainerDied","Data":"c7991ce896e8845d6895bcceeb6dc54cb4bd749d63d0c2fd88c63a3096c657f4"} Jan 24 08:33:25 crc kubenswrapper[4705]: I0124 08:33:25.578062 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:33:25 crc kubenswrapper[4705]: E0124 08:33:25.578650 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:33:25 crc kubenswrapper[4705]: I0124 08:33:25.876288 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 24 08:33:26 crc kubenswrapper[4705]: I0124 08:33:26.580321 4705 generic.go:334] 
"Generic (PLEG): container finished" podID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerID="828f0409a2667a6f31b69c64a012453123202846e0feedd2a9d3751f2a5d0df1" exitCode=0 Jan 24 08:33:26 crc kubenswrapper[4705]: I0124 08:33:26.580535 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c321cd32-3e43-4eb3-aa7a-b5e67978d976","Type":"ContainerDied","Data":"828f0409a2667a6f31b69c64a012453123202846e0feedd2a9d3751f2a5d0df1"} Jan 24 08:33:26 crc kubenswrapper[4705]: I0124 08:33:26.583626 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38e292be-6948-45cc-b208-ea9bba97622a","Type":"ContainerStarted","Data":"9f1326cec49bcff6174d4ed5cf8908059932a71b11e47c91a6c5b2a132868e23"} Jan 24 08:33:28 crc kubenswrapper[4705]: I0124 08:33:28.803268 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38e292be-6948-45cc-b208-ea9bba97622a","Type":"ContainerStarted","Data":"eb599ba551ec743aa4e8bd465d95f9f7664378cac9c8714982a659dfb4a323e7"} Jan 24 08:33:30 crc kubenswrapper[4705]: I0124 08:33:30.931011 4705 generic.go:334] "Generic (PLEG): container finished" podID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerID="a1dfeca9415ce440fa6bf6a5fcefee47a1fb74d7b847e8656d7f54684348201e" exitCode=0 Jan 24 08:33:30 crc kubenswrapper[4705]: I0124 08:33:30.931120 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c321cd32-3e43-4eb3-aa7a-b5e67978d976","Type":"ContainerDied","Data":"a1dfeca9415ce440fa6bf6a5fcefee47a1fb74d7b847e8656d7f54684348201e"} Jan 24 08:33:30 crc kubenswrapper[4705]: I0124 08:33:30.934394 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38e292be-6948-45cc-b208-ea9bba97622a","Type":"ContainerStarted","Data":"196d7a5b67611198fcd4bf5076f4d95f5d980c1d5826f0396117a9a815d94c88"} Jan 24 08:33:30 crc kubenswrapper[4705]: I0124 08:33:30.934538 4705 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/aodh-0" podUID="38e292be-6948-45cc-b208-ea9bba97622a" containerName="aodh-api" containerID="cri-o://c53ae0b041a027b8a602acb1d9173aadbc995048f7c4c464a84dd29fdcf478c4" gracePeriod=30 Jan 24 08:33:30 crc kubenswrapper[4705]: I0124 08:33:30.934632 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="38e292be-6948-45cc-b208-ea9bba97622a" containerName="aodh-notifier" containerID="cri-o://eb599ba551ec743aa4e8bd465d95f9f7664378cac9c8714982a659dfb4a323e7" gracePeriod=30 Jan 24 08:33:30 crc kubenswrapper[4705]: I0124 08:33:30.934698 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="38e292be-6948-45cc-b208-ea9bba97622a" containerName="aodh-evaluator" containerID="cri-o://9f1326cec49bcff6174d4ed5cf8908059932a71b11e47c91a6c5b2a132868e23" gracePeriod=30 Jan 24 08:33:30 crc kubenswrapper[4705]: I0124 08:33:30.934782 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="38e292be-6948-45cc-b208-ea9bba97622a" containerName="aodh-listener" containerID="cri-o://196d7a5b67611198fcd4bf5076f4d95f5d980c1d5826f0396117a9a815d94c88" gracePeriod=30 Jan 24 08:33:30 crc kubenswrapper[4705]: E0124 08:33:30.956094 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92c4257_e428_48ba_b079_9fb9d3453b90.slice\": RecentStats: unable to find data in memory cache]" Jan 24 08:33:30 crc kubenswrapper[4705]: I0124 08:33:30.981638 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.260110111 podStartE2EDuration="9.981608918s" podCreationTimestamp="2026-01-24 08:33:21 +0000 UTC" firstStartedPulling="2026-01-24 08:33:23.072030374 +0000 UTC m=+3141.791903662" lastFinishedPulling="2026-01-24 
08:33:29.793529181 +0000 UTC m=+3148.513402469" observedRunningTime="2026-01-24 08:33:30.963255747 +0000 UTC m=+3149.683129035" watchObservedRunningTime="2026-01-24 08:33:30.981608918 +0000 UTC m=+3149.701482206" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.053490 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.145748 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c321cd32-3e43-4eb3-aa7a-b5e67978d976-run-httpd\") pod \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.145880 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spvvc\" (UniqueName: \"kubernetes.io/projected/c321cd32-3e43-4eb3-aa7a-b5e67978d976-kube-api-access-spvvc\") pod \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.146072 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-scripts\") pod \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.146180 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-combined-ca-bundle\") pod \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.146209 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c321cd32-3e43-4eb3-aa7a-b5e67978d976-log-httpd\") pod \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.146238 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-config-data\") pod \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.146300 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-sg-core-conf-yaml\") pod \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.146344 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-ceilometer-tls-certs\") pod \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\" (UID: \"c321cd32-3e43-4eb3-aa7a-b5e67978d976\") " Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.147394 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c321cd32-3e43-4eb3-aa7a-b5e67978d976-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c321cd32-3e43-4eb3-aa7a-b5e67978d976" (UID: "c321cd32-3e43-4eb3-aa7a-b5e67978d976"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.147703 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c321cd32-3e43-4eb3-aa7a-b5e67978d976-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c321cd32-3e43-4eb3-aa7a-b5e67978d976" (UID: "c321cd32-3e43-4eb3-aa7a-b5e67978d976"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.159017 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-scripts" (OuterVolumeSpecName: "scripts") pod "c321cd32-3e43-4eb3-aa7a-b5e67978d976" (UID: "c321cd32-3e43-4eb3-aa7a-b5e67978d976"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.161084 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c321cd32-3e43-4eb3-aa7a-b5e67978d976-kube-api-access-spvvc" (OuterVolumeSpecName: "kube-api-access-spvvc") pod "c321cd32-3e43-4eb3-aa7a-b5e67978d976" (UID: "c321cd32-3e43-4eb3-aa7a-b5e67978d976"). InnerVolumeSpecName "kube-api-access-spvvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.181858 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c321cd32-3e43-4eb3-aa7a-b5e67978d976" (UID: "c321cd32-3e43-4eb3-aa7a-b5e67978d976"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.249471 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c321cd32-3e43-4eb3-aa7a-b5e67978d976-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.249512 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spvvc\" (UniqueName: \"kubernetes.io/projected/c321cd32-3e43-4eb3-aa7a-b5e67978d976-kube-api-access-spvvc\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.249533 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.249549 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c321cd32-3e43-4eb3-aa7a-b5e67978d976-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.249560 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.412739 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c321cd32-3e43-4eb3-aa7a-b5e67978d976" (UID: "c321cd32-3e43-4eb3-aa7a-b5e67978d976"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.414102 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.445843 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c321cd32-3e43-4eb3-aa7a-b5e67978d976" (UID: "c321cd32-3e43-4eb3-aa7a-b5e67978d976"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.452872 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-config-data" (OuterVolumeSpecName: "config-data") pod "c321cd32-3e43-4eb3-aa7a-b5e67978d976" (UID: "c321cd32-3e43-4eb3-aa7a-b5e67978d976"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.516363 4705 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.516433 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c321cd32-3e43-4eb3-aa7a-b5e67978d976-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.946491 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c321cd32-3e43-4eb3-aa7a-b5e67978d976","Type":"ContainerDied","Data":"a3dd6fd67a0e486566ad7c61ae953da9e58e70e6ce090da208c53509752b3b4e"} Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.946568 4705 scope.go:117] "RemoveContainer" containerID="370c21714590a8fc27d58b1d7ae90058a81f95823e6ae8f18182f185bf1c0b1c" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.946570 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.951129 4705 generic.go:334] "Generic (PLEG): container finished" podID="38e292be-6948-45cc-b208-ea9bba97622a" containerID="eb599ba551ec743aa4e8bd465d95f9f7664378cac9c8714982a659dfb4a323e7" exitCode=0 Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.951174 4705 generic.go:334] "Generic (PLEG): container finished" podID="38e292be-6948-45cc-b208-ea9bba97622a" containerID="9f1326cec49bcff6174d4ed5cf8908059932a71b11e47c91a6c5b2a132868e23" exitCode=0 Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.951184 4705 generic.go:334] "Generic (PLEG): container finished" podID="38e292be-6948-45cc-b208-ea9bba97622a" containerID="c53ae0b041a027b8a602acb1d9173aadbc995048f7c4c464a84dd29fdcf478c4" exitCode=0 Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.951175 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38e292be-6948-45cc-b208-ea9bba97622a","Type":"ContainerDied","Data":"eb599ba551ec743aa4e8bd465d95f9f7664378cac9c8714982a659dfb4a323e7"} Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.951243 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38e292be-6948-45cc-b208-ea9bba97622a","Type":"ContainerDied","Data":"9f1326cec49bcff6174d4ed5cf8908059932a71b11e47c91a6c5b2a132868e23"} Jan 24 08:33:31 crc kubenswrapper[4705]: I0124 08:33:31.951255 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38e292be-6948-45cc-b208-ea9bba97622a","Type":"ContainerDied","Data":"c53ae0b041a027b8a602acb1d9173aadbc995048f7c4c464a84dd29fdcf478c4"} Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.016553 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.019368 4705 scope.go:117] "RemoveContainer" 
containerID="c7991ce896e8845d6895bcceeb6dc54cb4bd749d63d0c2fd88c63a3096c657f4" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.028217 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.044628 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:33:32 crc kubenswrapper[4705]: E0124 08:33:32.045207 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerName="sg-core" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.045267 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerName="sg-core" Jan 24 08:33:32 crc kubenswrapper[4705]: E0124 08:33:32.045285 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerName="ceilometer-notification-agent" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.045294 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerName="ceilometer-notification-agent" Jan 24 08:33:32 crc kubenswrapper[4705]: E0124 08:33:32.045314 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerName="proxy-httpd" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.045321 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerName="proxy-httpd" Jan 24 08:33:32 crc kubenswrapper[4705]: E0124 08:33:32.045352 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerName="ceilometer-central-agent" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.045360 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerName="ceilometer-central-agent" Jan 24 08:33:32 
crc kubenswrapper[4705]: I0124 08:33:32.045604 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerName="sg-core" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.045625 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerName="ceilometer-notification-agent" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.045643 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerName="proxy-httpd" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.045660 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" containerName="ceilometer-central-agent" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.047496 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.047797 4705 scope.go:117] "RemoveContainer" containerID="a1dfeca9415ce440fa6bf6a5fcefee47a1fb74d7b847e8656d7f54684348201e" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.049696 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.050715 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.053666 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.085123 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.088696 4705 scope.go:117] "RemoveContainer" containerID="828f0409a2667a6f31b69c64a012453123202846e0feedd2a9d3751f2a5d0df1" Jan 24 08:33:32 
crc kubenswrapper[4705]: I0124 08:33:32.131890 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d641fc4-49a3-4686-9839-730afa8afd5d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.131934 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d641fc4-49a3-4686-9839-730afa8afd5d-log-httpd\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.131979 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d641fc4-49a3-4686-9839-730afa8afd5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.132012 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d641fc4-49a3-4686-9839-730afa8afd5d-config-data\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.132051 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d641fc4-49a3-4686-9839-730afa8afd5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.132104 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d641fc4-49a3-4686-9839-730afa8afd5d-scripts\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.132144 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbmtg\" (UniqueName: \"kubernetes.io/projected/7d641fc4-49a3-4686-9839-730afa8afd5d-kube-api-access-zbmtg\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.132237 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d641fc4-49a3-4686-9839-730afa8afd5d-run-httpd\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.233969 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d641fc4-49a3-4686-9839-730afa8afd5d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.234282 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d641fc4-49a3-4686-9839-730afa8afd5d-log-httpd\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.234411 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d641fc4-49a3-4686-9839-730afa8afd5d-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.234531 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d641fc4-49a3-4686-9839-730afa8afd5d-config-data\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.234643 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d641fc4-49a3-4686-9839-730afa8afd5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.234770 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d641fc4-49a3-4686-9839-730afa8afd5d-scripts\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.234888 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbmtg\" (UniqueName: \"kubernetes.io/projected/7d641fc4-49a3-4686-9839-730afa8afd5d-kube-api-access-zbmtg\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.235599 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d641fc4-49a3-4686-9839-730afa8afd5d-run-httpd\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.234903 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d641fc4-49a3-4686-9839-730afa8afd5d-log-httpd\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.235939 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d641fc4-49a3-4686-9839-730afa8afd5d-run-httpd\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.239626 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d641fc4-49a3-4686-9839-730afa8afd5d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.239642 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d641fc4-49a3-4686-9839-730afa8afd5d-scripts\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.239727 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d641fc4-49a3-4686-9839-730afa8afd5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.240647 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d641fc4-49a3-4686-9839-730afa8afd5d-config-data\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.253540 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d641fc4-49a3-4686-9839-730afa8afd5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.259727 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbmtg\" (UniqueName: \"kubernetes.io/projected/7d641fc4-49a3-4686-9839-730afa8afd5d-kube-api-access-zbmtg\") pod \"ceilometer-0\" (UID: \"7d641fc4-49a3-4686-9839-730afa8afd5d\") " pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.369171 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 08:33:32 crc kubenswrapper[4705]: I0124 08:33:32.965891 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 08:33:33 crc kubenswrapper[4705]: I0124 08:33:33.625154 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c321cd32-3e43-4eb3-aa7a-b5e67978d976" path="/var/lib/kubelet/pods/c321cd32-3e43-4eb3-aa7a-b5e67978d976/volumes" Jan 24 08:33:33 crc kubenswrapper[4705]: I0124 08:33:33.985942 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d641fc4-49a3-4686-9839-730afa8afd5d","Type":"ContainerStarted","Data":"c1d68e05fb109d3d4f8e03d1b0adc83b8a036aca0a01bbb87b752d12932cd300"} Jan 24 08:33:35 crc kubenswrapper[4705]: I0124 08:33:35.002972 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d641fc4-49a3-4686-9839-730afa8afd5d","Type":"ContainerStarted","Data":"ed928d27360ed90beec2aec0beabba53a4a716a4c07602c27da0fc9538d2204c"} Jan 24 08:33:35 crc kubenswrapper[4705]: I0124 08:33:35.003314 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7d641fc4-49a3-4686-9839-730afa8afd5d","Type":"ContainerStarted","Data":"841ee2acdae380bfba4cf719027b96f246c8cc83c5bfcbd9ec57e32b6617a77b"} Jan 24 08:33:36 crc kubenswrapper[4705]: I0124 08:33:36.020633 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d641fc4-49a3-4686-9839-730afa8afd5d","Type":"ContainerStarted","Data":"459643ff2e4b9ee466b5468c95d1d0863d0b565b9fd5194f5f814d3db14888bf"} Jan 24 08:33:38 crc kubenswrapper[4705]: I0124 08:33:38.273777 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d641fc4-49a3-4686-9839-730afa8afd5d","Type":"ContainerStarted","Data":"10cd4418b6eeeb32a24854701fab1b63e0b5e6f79bd771fbf268dafd3e633b74"} Jan 24 08:33:38 crc kubenswrapper[4705]: I0124 08:33:38.276469 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 08:33:38 crc kubenswrapper[4705]: I0124 08:33:38.297105 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.84509318 podStartE2EDuration="6.297082185s" podCreationTimestamp="2026-01-24 08:33:32 +0000 UTC" firstStartedPulling="2026-01-24 08:33:32.980607208 +0000 UTC m=+3151.700480496" lastFinishedPulling="2026-01-24 08:33:37.432596213 +0000 UTC m=+3156.152469501" observedRunningTime="2026-01-24 08:33:38.295692875 +0000 UTC m=+3157.015566163" watchObservedRunningTime="2026-01-24 08:33:38.297082185 +0000 UTC m=+3157.016955473" Jan 24 08:33:39 crc kubenswrapper[4705]: I0124 08:33:39.575667 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:33:39 crc kubenswrapper[4705]: E0124 08:33:39.575981 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:33:41 crc kubenswrapper[4705]: E0124 08:33:41.243562 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92c4257_e428_48ba_b079_9fb9d3453b90.slice\": RecentStats: unable to find data in memory cache]" Jan 24 08:33:50 crc kubenswrapper[4705]: I0124 08:33:50.575303 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:33:50 crc kubenswrapper[4705]: E0124 08:33:50.576045 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:33:51 crc kubenswrapper[4705]: E0124 08:33:51.514873 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92c4257_e428_48ba_b079_9fb9d3453b90.slice\": RecentStats: unable to find data in memory cache]" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.320368 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.477152 4705 generic.go:334] "Generic (PLEG): container finished" podID="38e292be-6948-45cc-b208-ea9bba97622a" containerID="196d7a5b67611198fcd4bf5076f4d95f5d980c1d5826f0396117a9a815d94c88" exitCode=137 Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.477465 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38e292be-6948-45cc-b208-ea9bba97622a","Type":"ContainerDied","Data":"196d7a5b67611198fcd4bf5076f4d95f5d980c1d5826f0396117a9a815d94c88"} Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.477509 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38e292be-6948-45cc-b208-ea9bba97622a","Type":"ContainerDied","Data":"aa2fea30b46fcbd7f13425cca6a8162eeecd23f0a6b174691b78bf6e55dc0553"} Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.477526 4705 scope.go:117] "RemoveContainer" containerID="196d7a5b67611198fcd4bf5076f4d95f5d980c1d5826f0396117a9a815d94c88" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.477693 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.504519 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-combined-ca-bundle\") pod \"38e292be-6948-45cc-b208-ea9bba97622a\" (UID: \"38e292be-6948-45cc-b208-ea9bba97622a\") " Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.504663 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-scripts\") pod \"38e292be-6948-45cc-b208-ea9bba97622a\" (UID: \"38e292be-6948-45cc-b208-ea9bba97622a\") " Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.504757 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-config-data\") pod \"38e292be-6948-45cc-b208-ea9bba97622a\" (UID: \"38e292be-6948-45cc-b208-ea9bba97622a\") " Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.504859 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9zcc\" (UniqueName: \"kubernetes.io/projected/38e292be-6948-45cc-b208-ea9bba97622a-kube-api-access-z9zcc\") pod \"38e292be-6948-45cc-b208-ea9bba97622a\" (UID: \"38e292be-6948-45cc-b208-ea9bba97622a\") " Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.506941 4705 scope.go:117] "RemoveContainer" containerID="eb599ba551ec743aa4e8bd465d95f9f7664378cac9c8714982a659dfb4a323e7" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.512465 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e292be-6948-45cc-b208-ea9bba97622a-kube-api-access-z9zcc" (OuterVolumeSpecName: "kube-api-access-z9zcc") pod "38e292be-6948-45cc-b208-ea9bba97622a" (UID: "38e292be-6948-45cc-b208-ea9bba97622a"). 
InnerVolumeSpecName "kube-api-access-z9zcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.516154 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-scripts" (OuterVolumeSpecName: "scripts") pod "38e292be-6948-45cc-b208-ea9bba97622a" (UID: "38e292be-6948-45cc-b208-ea9bba97622a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.591563 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:34:01 crc kubenswrapper[4705]: E0124 08:34:01.592079 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.609572 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.609608 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9zcc\" (UniqueName: \"kubernetes.io/projected/38e292be-6948-45cc-b208-ea9bba97622a-kube-api-access-z9zcc\") on node \"crc\" DevicePath \"\"" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.634502 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"38e292be-6948-45cc-b208-ea9bba97622a" (UID: "38e292be-6948-45cc-b208-ea9bba97622a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.639484 4705 scope.go:117] "RemoveContainer" containerID="9f1326cec49bcff6174d4ed5cf8908059932a71b11e47c91a6c5b2a132868e23" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.646033 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-config-data" (OuterVolumeSpecName: "config-data") pod "38e292be-6948-45cc-b208-ea9bba97622a" (UID: "38e292be-6948-45cc-b208-ea9bba97622a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.664935 4705 scope.go:117] "RemoveContainer" containerID="c53ae0b041a027b8a602acb1d9173aadbc995048f7c4c464a84dd29fdcf478c4" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.686680 4705 scope.go:117] "RemoveContainer" containerID="196d7a5b67611198fcd4bf5076f4d95f5d980c1d5826f0396117a9a815d94c88" Jan 24 08:34:01 crc kubenswrapper[4705]: E0124 08:34:01.687188 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"196d7a5b67611198fcd4bf5076f4d95f5d980c1d5826f0396117a9a815d94c88\": container with ID starting with 196d7a5b67611198fcd4bf5076f4d95f5d980c1d5826f0396117a9a815d94c88 not found: ID does not exist" containerID="196d7a5b67611198fcd4bf5076f4d95f5d980c1d5826f0396117a9a815d94c88" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.687245 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"196d7a5b67611198fcd4bf5076f4d95f5d980c1d5826f0396117a9a815d94c88"} err="failed to get container status \"196d7a5b67611198fcd4bf5076f4d95f5d980c1d5826f0396117a9a815d94c88\": rpc error: code = NotFound desc = could not find 
container \"196d7a5b67611198fcd4bf5076f4d95f5d980c1d5826f0396117a9a815d94c88\": container with ID starting with 196d7a5b67611198fcd4bf5076f4d95f5d980c1d5826f0396117a9a815d94c88 not found: ID does not exist" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.687282 4705 scope.go:117] "RemoveContainer" containerID="eb599ba551ec743aa4e8bd465d95f9f7664378cac9c8714982a659dfb4a323e7" Jan 24 08:34:01 crc kubenswrapper[4705]: E0124 08:34:01.687508 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb599ba551ec743aa4e8bd465d95f9f7664378cac9c8714982a659dfb4a323e7\": container with ID starting with eb599ba551ec743aa4e8bd465d95f9f7664378cac9c8714982a659dfb4a323e7 not found: ID does not exist" containerID="eb599ba551ec743aa4e8bd465d95f9f7664378cac9c8714982a659dfb4a323e7" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.687528 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb599ba551ec743aa4e8bd465d95f9f7664378cac9c8714982a659dfb4a323e7"} err="failed to get container status \"eb599ba551ec743aa4e8bd465d95f9f7664378cac9c8714982a659dfb4a323e7\": rpc error: code = NotFound desc = could not find container \"eb599ba551ec743aa4e8bd465d95f9f7664378cac9c8714982a659dfb4a323e7\": container with ID starting with eb599ba551ec743aa4e8bd465d95f9f7664378cac9c8714982a659dfb4a323e7 not found: ID does not exist" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.687543 4705 scope.go:117] "RemoveContainer" containerID="9f1326cec49bcff6174d4ed5cf8908059932a71b11e47c91a6c5b2a132868e23" Jan 24 08:34:01 crc kubenswrapper[4705]: E0124 08:34:01.687962 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f1326cec49bcff6174d4ed5cf8908059932a71b11e47c91a6c5b2a132868e23\": container with ID starting with 9f1326cec49bcff6174d4ed5cf8908059932a71b11e47c91a6c5b2a132868e23 not found: ID does 
not exist" containerID="9f1326cec49bcff6174d4ed5cf8908059932a71b11e47c91a6c5b2a132868e23" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.687998 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f1326cec49bcff6174d4ed5cf8908059932a71b11e47c91a6c5b2a132868e23"} err="failed to get container status \"9f1326cec49bcff6174d4ed5cf8908059932a71b11e47c91a6c5b2a132868e23\": rpc error: code = NotFound desc = could not find container \"9f1326cec49bcff6174d4ed5cf8908059932a71b11e47c91a6c5b2a132868e23\": container with ID starting with 9f1326cec49bcff6174d4ed5cf8908059932a71b11e47c91a6c5b2a132868e23 not found: ID does not exist" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.688015 4705 scope.go:117] "RemoveContainer" containerID="c53ae0b041a027b8a602acb1d9173aadbc995048f7c4c464a84dd29fdcf478c4" Jan 24 08:34:01 crc kubenswrapper[4705]: E0124 08:34:01.688234 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c53ae0b041a027b8a602acb1d9173aadbc995048f7c4c464a84dd29fdcf478c4\": container with ID starting with c53ae0b041a027b8a602acb1d9173aadbc995048f7c4c464a84dd29fdcf478c4 not found: ID does not exist" containerID="c53ae0b041a027b8a602acb1d9173aadbc995048f7c4c464a84dd29fdcf478c4" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.688269 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c53ae0b041a027b8a602acb1d9173aadbc995048f7c4c464a84dd29fdcf478c4"} err="failed to get container status \"c53ae0b041a027b8a602acb1d9173aadbc995048f7c4c464a84dd29fdcf478c4\": rpc error: code = NotFound desc = could not find container \"c53ae0b041a027b8a602acb1d9173aadbc995048f7c4c464a84dd29fdcf478c4\": container with ID starting with c53ae0b041a027b8a602acb1d9173aadbc995048f7c4c464a84dd29fdcf478c4 not found: ID does not exist" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.711590 4705 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.711629 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e292be-6948-45cc-b208-ea9bba97622a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:34:01 crc kubenswrapper[4705]: E0124 08:34:01.775136 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92c4257_e428_48ba_b079_9fb9d3453b90.slice\": RecentStats: unable to find data in memory cache]" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.829642 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.850162 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.865612 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 24 08:34:01 crc kubenswrapper[4705]: E0124 08:34:01.866111 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e292be-6948-45cc-b208-ea9bba97622a" containerName="aodh-evaluator" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.866134 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e292be-6948-45cc-b208-ea9bba97622a" containerName="aodh-evaluator" Jan 24 08:34:01 crc kubenswrapper[4705]: E0124 08:34:01.866152 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e292be-6948-45cc-b208-ea9bba97622a" containerName="aodh-notifier" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.866159 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e292be-6948-45cc-b208-ea9bba97622a" 
containerName="aodh-notifier" Jan 24 08:34:01 crc kubenswrapper[4705]: E0124 08:34:01.866172 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e292be-6948-45cc-b208-ea9bba97622a" containerName="aodh-api" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.866180 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e292be-6948-45cc-b208-ea9bba97622a" containerName="aodh-api" Jan 24 08:34:01 crc kubenswrapper[4705]: E0124 08:34:01.866209 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e292be-6948-45cc-b208-ea9bba97622a" containerName="aodh-listener" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.866214 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e292be-6948-45cc-b208-ea9bba97622a" containerName="aodh-listener" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.866599 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e292be-6948-45cc-b208-ea9bba97622a" containerName="aodh-listener" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.866630 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e292be-6948-45cc-b208-ea9bba97622a" containerName="aodh-notifier" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.866655 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e292be-6948-45cc-b208-ea9bba97622a" containerName="aodh-api" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.866670 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e292be-6948-45cc-b208-ea9bba97622a" containerName="aodh-evaluator" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.868784 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.872890 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.873068 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.873170 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.873266 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-q9ztl" Jan 24 08:34:01 crc kubenswrapper[4705]: I0124 08:34:01.873542 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.073192 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-internal-tls-certs\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.073500 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.073530 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-public-tls-certs\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 
crc kubenswrapper[4705]: I0124 08:34:02.073596 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-scripts\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.073650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvsc5\" (UniqueName: \"kubernetes.io/projected/a5b63e41-f8bd-4ac0-b23e-3716b5098194-kube-api-access-dvsc5\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.073743 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-config-data\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.081567 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.175267 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-internal-tls-certs\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.175338 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.175373 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-public-tls-certs\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.175468 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-scripts\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.175557 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvsc5\" (UniqueName: \"kubernetes.io/projected/a5b63e41-f8bd-4ac0-b23e-3716b5098194-kube-api-access-dvsc5\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.175694 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-config-data\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.183654 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-internal-tls-certs\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.183717 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-public-tls-certs\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: 
I0124 08:34:02.184440 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-config-data\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.184491 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.184909 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-scripts\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.205802 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvsc5\" (UniqueName: \"kubernetes.io/projected/a5b63e41-f8bd-4ac0-b23e-3716b5098194-kube-api-access-dvsc5\") pod \"aodh-0\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.387640 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.436141 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 24 08:34:02 crc kubenswrapper[4705]: I0124 08:34:02.959636 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 24 08:34:02 crc kubenswrapper[4705]: W0124 08:34:02.964875 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5b63e41_f8bd_4ac0_b23e_3716b5098194.slice/crio-df5a0218be6960182ae34c3f93369ad42da048df7d1f3834996a1bc668f46852 WatchSource:0}: Error finding container df5a0218be6960182ae34c3f93369ad42da048df7d1f3834996a1bc668f46852: Status 404 returned error can't find the container with id df5a0218be6960182ae34c3f93369ad42da048df7d1f3834996a1bc668f46852 Jan 24 08:34:03 crc kubenswrapper[4705]: I0124 08:34:03.021752 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b8mgp"] Jan 24 08:34:03 crc kubenswrapper[4705]: I0124 08:34:03.024496 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:03 crc kubenswrapper[4705]: I0124 08:34:03.034892 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b8mgp"] Jan 24 08:34:03 crc kubenswrapper[4705]: I0124 08:34:03.285884 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-catalog-content\") pod \"redhat-marketplace-b8mgp\" (UID: \"da767e4f-5c8c-4c09-be0b-e3dd460dcf95\") " pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:03 crc kubenswrapper[4705]: I0124 08:34:03.285971 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q5hj\" (UniqueName: \"kubernetes.io/projected/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-kube-api-access-6q5hj\") pod \"redhat-marketplace-b8mgp\" (UID: \"da767e4f-5c8c-4c09-be0b-e3dd460dcf95\") " pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:03 crc kubenswrapper[4705]: I0124 08:34:03.286181 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-utilities\") pod \"redhat-marketplace-b8mgp\" (UID: \"da767e4f-5c8c-4c09-be0b-e3dd460dcf95\") " pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:03 crc kubenswrapper[4705]: I0124 08:34:03.389107 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-catalog-content\") pod \"redhat-marketplace-b8mgp\" (UID: \"da767e4f-5c8c-4c09-be0b-e3dd460dcf95\") " pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:03 crc kubenswrapper[4705]: I0124 08:34:03.389164 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6q5hj\" (UniqueName: \"kubernetes.io/projected/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-kube-api-access-6q5hj\") pod \"redhat-marketplace-b8mgp\" (UID: \"da767e4f-5c8c-4c09-be0b-e3dd460dcf95\") " pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:03 crc kubenswrapper[4705]: I0124 08:34:03.389228 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-utilities\") pod \"redhat-marketplace-b8mgp\" (UID: \"da767e4f-5c8c-4c09-be0b-e3dd460dcf95\") " pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:03 crc kubenswrapper[4705]: I0124 08:34:03.389848 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-utilities\") pod \"redhat-marketplace-b8mgp\" (UID: \"da767e4f-5c8c-4c09-be0b-e3dd460dcf95\") " pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:03 crc kubenswrapper[4705]: I0124 08:34:03.394289 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-catalog-content\") pod \"redhat-marketplace-b8mgp\" (UID: \"da767e4f-5c8c-4c09-be0b-e3dd460dcf95\") " pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:03 crc kubenswrapper[4705]: I0124 08:34:03.415658 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q5hj\" (UniqueName: \"kubernetes.io/projected/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-kube-api-access-6q5hj\") pod \"redhat-marketplace-b8mgp\" (UID: \"da767e4f-5c8c-4c09-be0b-e3dd460dcf95\") " pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:03 crc kubenswrapper[4705]: I0124 08:34:03.525925 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"a5b63e41-f8bd-4ac0-b23e-3716b5098194","Type":"ContainerStarted","Data":"df5a0218be6960182ae34c3f93369ad42da048df7d1f3834996a1bc668f46852"} Jan 24 08:34:03 crc kubenswrapper[4705]: I0124 08:34:03.591800 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38e292be-6948-45cc-b208-ea9bba97622a" path="/var/lib/kubelet/pods/38e292be-6948-45cc-b208-ea9bba97622a/volumes" Jan 24 08:34:03 crc kubenswrapper[4705]: I0124 08:34:03.646685 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:04 crc kubenswrapper[4705]: W0124 08:34:04.128252 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda767e4f_5c8c_4c09_be0b_e3dd460dcf95.slice/crio-c6bcf252e2ac014ccb9d8bf8a79f69ec9bc0473206241272990f20bed7aee34f WatchSource:0}: Error finding container c6bcf252e2ac014ccb9d8bf8a79f69ec9bc0473206241272990f20bed7aee34f: Status 404 returned error can't find the container with id c6bcf252e2ac014ccb9d8bf8a79f69ec9bc0473206241272990f20bed7aee34f Jan 24 08:34:04 crc kubenswrapper[4705]: I0124 08:34:04.129453 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b8mgp"] Jan 24 08:34:04 crc kubenswrapper[4705]: I0124 08:34:04.539012 4705 generic.go:334] "Generic (PLEG): container finished" podID="da767e4f-5c8c-4c09-be0b-e3dd460dcf95" containerID="2e41b5e4a3fd9aec077951d246920980cda98b2ab9d5904b52c319b1b7e423b6" exitCode=0 Jan 24 08:34:04 crc kubenswrapper[4705]: I0124 08:34:04.539105 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b8mgp" event={"ID":"da767e4f-5c8c-4c09-be0b-e3dd460dcf95","Type":"ContainerDied","Data":"2e41b5e4a3fd9aec077951d246920980cda98b2ab9d5904b52c319b1b7e423b6"} Jan 24 08:34:04 crc kubenswrapper[4705]: I0124 08:34:04.539400 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-b8mgp" event={"ID":"da767e4f-5c8c-4c09-be0b-e3dd460dcf95","Type":"ContainerStarted","Data":"c6bcf252e2ac014ccb9d8bf8a79f69ec9bc0473206241272990f20bed7aee34f"} Jan 24 08:34:04 crc kubenswrapper[4705]: I0124 08:34:04.540776 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a5b63e41-f8bd-4ac0-b23e-3716b5098194","Type":"ContainerStarted","Data":"4733392d0e8b6b83304edce8c62741e3c2cd02c2dcbc80a50e83321e50931efa"} Jan 24 08:34:05 crc kubenswrapper[4705]: I0124 08:34:05.553044 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b8mgp" event={"ID":"da767e4f-5c8c-4c09-be0b-e3dd460dcf95","Type":"ContainerStarted","Data":"1d62795d8780016ca135cf361a54cf37f269bd67ae7c35382f53d10340ac1a0f"} Jan 24 08:34:05 crc kubenswrapper[4705]: I0124 08:34:05.558565 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a5b63e41-f8bd-4ac0-b23e-3716b5098194","Type":"ContainerStarted","Data":"19c74d544773422095c338a3830bf3840932aa8197e513d405799042c562fd03"} Jan 24 08:34:05 crc kubenswrapper[4705]: I0124 08:34:05.558613 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a5b63e41-f8bd-4ac0-b23e-3716b5098194","Type":"ContainerStarted","Data":"aabba643082d6b2153cc8d470336f8167a3344652db41623a774d7f6cd9cdf26"} Jan 24 08:34:06 crc kubenswrapper[4705]: I0124 08:34:06.591367 4705 generic.go:334] "Generic (PLEG): container finished" podID="da767e4f-5c8c-4c09-be0b-e3dd460dcf95" containerID="1d62795d8780016ca135cf361a54cf37f269bd67ae7c35382f53d10340ac1a0f" exitCode=0 Jan 24 08:34:06 crc kubenswrapper[4705]: I0124 08:34:06.593354 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b8mgp" event={"ID":"da767e4f-5c8c-4c09-be0b-e3dd460dcf95","Type":"ContainerDied","Data":"1d62795d8780016ca135cf361a54cf37f269bd67ae7c35382f53d10340ac1a0f"} Jan 24 08:34:07 
crc kubenswrapper[4705]: I0124 08:34:07.603056 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b8mgp" event={"ID":"da767e4f-5c8c-4c09-be0b-e3dd460dcf95","Type":"ContainerStarted","Data":"a8824331a6e5a246bb19378402e6f09b41c8eeaf0ce25c85c0027d5fda4e4a27"} Jan 24 08:34:07 crc kubenswrapper[4705]: I0124 08:34:07.608126 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a5b63e41-f8bd-4ac0-b23e-3716b5098194","Type":"ContainerStarted","Data":"509d159820b319cd0ad5cd7133dfbaf601cdd5aa0e98ccb0686009c9cc60f3c9"} Jan 24 08:34:07 crc kubenswrapper[4705]: I0124 08:34:07.624941 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b8mgp" podStartSLOduration=3.136590314 podStartE2EDuration="5.624903041s" podCreationTimestamp="2026-01-24 08:34:02 +0000 UTC" firstStartedPulling="2026-01-24 08:34:04.542563801 +0000 UTC m=+3183.262437089" lastFinishedPulling="2026-01-24 08:34:07.030876528 +0000 UTC m=+3185.750749816" observedRunningTime="2026-01-24 08:34:07.617487106 +0000 UTC m=+3186.337360394" watchObservedRunningTime="2026-01-24 08:34:07.624903041 +0000 UTC m=+3186.344776329" Jan 24 08:34:07 crc kubenswrapper[4705]: I0124 08:34:07.653174 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.217117273 podStartE2EDuration="6.653140457s" podCreationTimestamp="2026-01-24 08:34:01 +0000 UTC" firstStartedPulling="2026-01-24 08:34:02.967323012 +0000 UTC m=+3181.687196300" lastFinishedPulling="2026-01-24 08:34:06.403346196 +0000 UTC m=+3185.123219484" observedRunningTime="2026-01-24 08:34:07.638662909 +0000 UTC m=+3186.358536217" watchObservedRunningTime="2026-01-24 08:34:07.653140457 +0000 UTC m=+3186.373013745" Jan 24 08:34:12 crc kubenswrapper[4705]: I0124 08:34:12.577006 4705 scope.go:117] "RemoveContainer" 
containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:34:12 crc kubenswrapper[4705]: E0124 08:34:12.577966 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:34:13 crc kubenswrapper[4705]: I0124 08:34:13.647300 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:13 crc kubenswrapper[4705]: I0124 08:34:13.647391 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:13 crc kubenswrapper[4705]: I0124 08:34:13.705721 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:14 crc kubenswrapper[4705]: I0124 08:34:14.718840 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:14 crc kubenswrapper[4705]: I0124 08:34:14.769225 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b8mgp"] Jan 24 08:34:16 crc kubenswrapper[4705]: I0124 08:34:16.686227 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b8mgp" podUID="da767e4f-5c8c-4c09-be0b-e3dd460dcf95" containerName="registry-server" containerID="cri-o://a8824331a6e5a246bb19378402e6f09b41c8eeaf0ce25c85c0027d5fda4e4a27" gracePeriod=2 Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.186346 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.231857 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q5hj\" (UniqueName: \"kubernetes.io/projected/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-kube-api-access-6q5hj\") pod \"da767e4f-5c8c-4c09-be0b-e3dd460dcf95\" (UID: \"da767e4f-5c8c-4c09-be0b-e3dd460dcf95\") " Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.232137 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-catalog-content\") pod \"da767e4f-5c8c-4c09-be0b-e3dd460dcf95\" (UID: \"da767e4f-5c8c-4c09-be0b-e3dd460dcf95\") " Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.232307 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-utilities\") pod \"da767e4f-5c8c-4c09-be0b-e3dd460dcf95\" (UID: \"da767e4f-5c8c-4c09-be0b-e3dd460dcf95\") " Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.233260 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-utilities" (OuterVolumeSpecName: "utilities") pod "da767e4f-5c8c-4c09-be0b-e3dd460dcf95" (UID: "da767e4f-5c8c-4c09-be0b-e3dd460dcf95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.240889 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-kube-api-access-6q5hj" (OuterVolumeSpecName: "kube-api-access-6q5hj") pod "da767e4f-5c8c-4c09-be0b-e3dd460dcf95" (UID: "da767e4f-5c8c-4c09-be0b-e3dd460dcf95"). InnerVolumeSpecName "kube-api-access-6q5hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.258778 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da767e4f-5c8c-4c09-be0b-e3dd460dcf95" (UID: "da767e4f-5c8c-4c09-be0b-e3dd460dcf95"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.335308 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q5hj\" (UniqueName: \"kubernetes.io/projected/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-kube-api-access-6q5hj\") on node \"crc\" DevicePath \"\"" Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.335358 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.335367 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da767e4f-5c8c-4c09-be0b-e3dd460dcf95-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.696290 4705 generic.go:334] "Generic (PLEG): container finished" podID="da767e4f-5c8c-4c09-be0b-e3dd460dcf95" containerID="a8824331a6e5a246bb19378402e6f09b41c8eeaf0ce25c85c0027d5fda4e4a27" exitCode=0 Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.696330 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b8mgp" event={"ID":"da767e4f-5c8c-4c09-be0b-e3dd460dcf95","Type":"ContainerDied","Data":"a8824331a6e5a246bb19378402e6f09b41c8eeaf0ce25c85c0027d5fda4e4a27"} Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.696351 4705 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b8mgp" Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.696365 4705 scope.go:117] "RemoveContainer" containerID="a8824331a6e5a246bb19378402e6f09b41c8eeaf0ce25c85c0027d5fda4e4a27" Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.696355 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b8mgp" event={"ID":"da767e4f-5c8c-4c09-be0b-e3dd460dcf95","Type":"ContainerDied","Data":"c6bcf252e2ac014ccb9d8bf8a79f69ec9bc0473206241272990f20bed7aee34f"} Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.719948 4705 scope.go:117] "RemoveContainer" containerID="1d62795d8780016ca135cf361a54cf37f269bd67ae7c35382f53d10340ac1a0f" Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.720442 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b8mgp"] Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.740792 4705 scope.go:117] "RemoveContainer" containerID="2e41b5e4a3fd9aec077951d246920980cda98b2ab9d5904b52c319b1b7e423b6" Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.742853 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b8mgp"] Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.792147 4705 scope.go:117] "RemoveContainer" containerID="a8824331a6e5a246bb19378402e6f09b41c8eeaf0ce25c85c0027d5fda4e4a27" Jan 24 08:34:17 crc kubenswrapper[4705]: E0124 08:34:17.792762 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8824331a6e5a246bb19378402e6f09b41c8eeaf0ce25c85c0027d5fda4e4a27\": container with ID starting with a8824331a6e5a246bb19378402e6f09b41c8eeaf0ce25c85c0027d5fda4e4a27 not found: ID does not exist" containerID="a8824331a6e5a246bb19378402e6f09b41c8eeaf0ce25c85c0027d5fda4e4a27" Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.792802 4705 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8824331a6e5a246bb19378402e6f09b41c8eeaf0ce25c85c0027d5fda4e4a27"} err="failed to get container status \"a8824331a6e5a246bb19378402e6f09b41c8eeaf0ce25c85c0027d5fda4e4a27\": rpc error: code = NotFound desc = could not find container \"a8824331a6e5a246bb19378402e6f09b41c8eeaf0ce25c85c0027d5fda4e4a27\": container with ID starting with a8824331a6e5a246bb19378402e6f09b41c8eeaf0ce25c85c0027d5fda4e4a27 not found: ID does not exist" Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.792913 4705 scope.go:117] "RemoveContainer" containerID="1d62795d8780016ca135cf361a54cf37f269bd67ae7c35382f53d10340ac1a0f" Jan 24 08:34:17 crc kubenswrapper[4705]: E0124 08:34:17.793339 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d62795d8780016ca135cf361a54cf37f269bd67ae7c35382f53d10340ac1a0f\": container with ID starting with 1d62795d8780016ca135cf361a54cf37f269bd67ae7c35382f53d10340ac1a0f not found: ID does not exist" containerID="1d62795d8780016ca135cf361a54cf37f269bd67ae7c35382f53d10340ac1a0f" Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.793382 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d62795d8780016ca135cf361a54cf37f269bd67ae7c35382f53d10340ac1a0f"} err="failed to get container status \"1d62795d8780016ca135cf361a54cf37f269bd67ae7c35382f53d10340ac1a0f\": rpc error: code = NotFound desc = could not find container \"1d62795d8780016ca135cf361a54cf37f269bd67ae7c35382f53d10340ac1a0f\": container with ID starting with 1d62795d8780016ca135cf361a54cf37f269bd67ae7c35382f53d10340ac1a0f not found: ID does not exist" Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.793414 4705 scope.go:117] "RemoveContainer" containerID="2e41b5e4a3fd9aec077951d246920980cda98b2ab9d5904b52c319b1b7e423b6" Jan 24 08:34:17 crc kubenswrapper[4705]: E0124 
08:34:17.793994 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e41b5e4a3fd9aec077951d246920980cda98b2ab9d5904b52c319b1b7e423b6\": container with ID starting with 2e41b5e4a3fd9aec077951d246920980cda98b2ab9d5904b52c319b1b7e423b6 not found: ID does not exist" containerID="2e41b5e4a3fd9aec077951d246920980cda98b2ab9d5904b52c319b1b7e423b6" Jan 24 08:34:17 crc kubenswrapper[4705]: I0124 08:34:17.794027 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e41b5e4a3fd9aec077951d246920980cda98b2ab9d5904b52c319b1b7e423b6"} err="failed to get container status \"2e41b5e4a3fd9aec077951d246920980cda98b2ab9d5904b52c319b1b7e423b6\": rpc error: code = NotFound desc = could not find container \"2e41b5e4a3fd9aec077951d246920980cda98b2ab9d5904b52c319b1b7e423b6\": container with ID starting with 2e41b5e4a3fd9aec077951d246920980cda98b2ab9d5904b52c319b1b7e423b6 not found: ID does not exist" Jan 24 08:34:19 crc kubenswrapper[4705]: I0124 08:34:19.588095 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da767e4f-5c8c-4c09-be0b-e3dd460dcf95" path="/var/lib/kubelet/pods/da767e4f-5c8c-4c09-be0b-e3dd460dcf95/volumes" Jan 24 08:34:23 crc kubenswrapper[4705]: I0124 08:34:23.575857 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:34:23 crc kubenswrapper[4705]: E0124 08:34:23.576552 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:34:36 crc kubenswrapper[4705]: I0124 08:34:36.575485 
4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:34:36 crc kubenswrapper[4705]: E0124 08:34:36.576267 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:34:51 crc kubenswrapper[4705]: I0124 08:34:51.583232 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:34:51 crc kubenswrapper[4705]: E0124 08:34:51.585010 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:35:02 crc kubenswrapper[4705]: I0124 08:35:02.575892 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:35:02 crc kubenswrapper[4705]: E0124 08:35:02.576646 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 
08:35:13.294264 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-szdtb"] Jan 24 08:35:13 crc kubenswrapper[4705]: E0124 08:35:13.295426 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da767e4f-5c8c-4c09-be0b-e3dd460dcf95" containerName="extract-content" Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 08:35:13.295446 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="da767e4f-5c8c-4c09-be0b-e3dd460dcf95" containerName="extract-content" Jan 24 08:35:13 crc kubenswrapper[4705]: E0124 08:35:13.295483 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da767e4f-5c8c-4c09-be0b-e3dd460dcf95" containerName="extract-utilities" Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 08:35:13.295491 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="da767e4f-5c8c-4c09-be0b-e3dd460dcf95" containerName="extract-utilities" Jan 24 08:35:13 crc kubenswrapper[4705]: E0124 08:35:13.295509 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da767e4f-5c8c-4c09-be0b-e3dd460dcf95" containerName="registry-server" Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 08:35:13.295517 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="da767e4f-5c8c-4c09-be0b-e3dd460dcf95" containerName="registry-server" Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 08:35:13.295788 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="da767e4f-5c8c-4c09-be0b-e3dd460dcf95" containerName="registry-server" Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 08:35:13.297559 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 08:35:13.328697 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-szdtb"] Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 08:35:13.533857 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba41ca23-3541-4685-9dde-556d09c9a272-utilities\") pod \"redhat-operators-szdtb\" (UID: \"ba41ca23-3541-4685-9dde-556d09c9a272\") " pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 08:35:13.533908 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr7ds\" (UniqueName: \"kubernetes.io/projected/ba41ca23-3541-4685-9dde-556d09c9a272-kube-api-access-wr7ds\") pod \"redhat-operators-szdtb\" (UID: \"ba41ca23-3541-4685-9dde-556d09c9a272\") " pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 08:35:13.534149 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba41ca23-3541-4685-9dde-556d09c9a272-catalog-content\") pod \"redhat-operators-szdtb\" (UID: \"ba41ca23-3541-4685-9dde-556d09c9a272\") " pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 08:35:13.635643 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba41ca23-3541-4685-9dde-556d09c9a272-catalog-content\") pod \"redhat-operators-szdtb\" (UID: \"ba41ca23-3541-4685-9dde-556d09c9a272\") " pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 08:35:13.635738 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba41ca23-3541-4685-9dde-556d09c9a272-utilities\") pod \"redhat-operators-szdtb\" (UID: \"ba41ca23-3541-4685-9dde-556d09c9a272\") " pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 08:35:13.635756 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr7ds\" (UniqueName: \"kubernetes.io/projected/ba41ca23-3541-4685-9dde-556d09c9a272-kube-api-access-wr7ds\") pod \"redhat-operators-szdtb\" (UID: \"ba41ca23-3541-4685-9dde-556d09c9a272\") " pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 08:35:13.636885 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba41ca23-3541-4685-9dde-556d09c9a272-catalog-content\") pod \"redhat-operators-szdtb\" (UID: \"ba41ca23-3541-4685-9dde-556d09c9a272\") " pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 08:35:13.637046 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba41ca23-3541-4685-9dde-556d09c9a272-utilities\") pod \"redhat-operators-szdtb\" (UID: \"ba41ca23-3541-4685-9dde-556d09c9a272\") " pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 08:35:13.676154 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr7ds\" (UniqueName: \"kubernetes.io/projected/ba41ca23-3541-4685-9dde-556d09c9a272-kube-api-access-wr7ds\") pod \"redhat-operators-szdtb\" (UID: \"ba41ca23-3541-4685-9dde-556d09c9a272\") " pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:13 crc kubenswrapper[4705]: I0124 08:35:13.939722 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:14 crc kubenswrapper[4705]: I0124 08:35:14.420429 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-szdtb"] Jan 24 08:35:14 crc kubenswrapper[4705]: I0124 08:35:14.608475 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:35:14 crc kubenswrapper[4705]: E0124 08:35:14.608965 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:35:14 crc kubenswrapper[4705]: I0124 08:35:14.622061 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szdtb" event={"ID":"ba41ca23-3541-4685-9dde-556d09c9a272","Type":"ContainerStarted","Data":"a0c7c7998bf7dff1720dd0292699cc40349586d3904230e6b7e71f6628c6c2de"} Jan 24 08:35:15 crc kubenswrapper[4705]: I0124 08:35:15.632683 4705 generic.go:334] "Generic (PLEG): container finished" podID="ba41ca23-3541-4685-9dde-556d09c9a272" containerID="9258e61afd92f3f48a68e133897e467187095c596c6e553ee660960af00039d6" exitCode=0 Jan 24 08:35:15 crc kubenswrapper[4705]: I0124 08:35:15.632867 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szdtb" event={"ID":"ba41ca23-3541-4685-9dde-556d09c9a272","Type":"ContainerDied","Data":"9258e61afd92f3f48a68e133897e467187095c596c6e553ee660960af00039d6"} Jan 24 08:35:15 crc kubenswrapper[4705]: I0124 08:35:15.634935 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 08:35:16 crc 
kubenswrapper[4705]: I0124 08:35:16.642938 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szdtb" event={"ID":"ba41ca23-3541-4685-9dde-556d09c9a272","Type":"ContainerStarted","Data":"932ae09537e4e71e5eff156a5a9cd0cd111ae5ecc25a095da0c8c036c5f3f81e"} Jan 24 08:35:19 crc kubenswrapper[4705]: I0124 08:35:19.676942 4705 generic.go:334] "Generic (PLEG): container finished" podID="ba41ca23-3541-4685-9dde-556d09c9a272" containerID="932ae09537e4e71e5eff156a5a9cd0cd111ae5ecc25a095da0c8c036c5f3f81e" exitCode=0 Jan 24 08:35:19 crc kubenswrapper[4705]: I0124 08:35:19.676996 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szdtb" event={"ID":"ba41ca23-3541-4685-9dde-556d09c9a272","Type":"ContainerDied","Data":"932ae09537e4e71e5eff156a5a9cd0cd111ae5ecc25a095da0c8c036c5f3f81e"} Jan 24 08:35:20 crc kubenswrapper[4705]: I0124 08:35:20.687950 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szdtb" event={"ID":"ba41ca23-3541-4685-9dde-556d09c9a272","Type":"ContainerStarted","Data":"7b86443d17ce16776890e3684980080d50ab924b6741340a130610f8f137f3ec"} Jan 24 08:35:23 crc kubenswrapper[4705]: I0124 08:35:23.940076 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:23 crc kubenswrapper[4705]: I0124 08:35:23.941716 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:24 crc kubenswrapper[4705]: I0124 08:35:24.990598 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-szdtb" podUID="ba41ca23-3541-4685-9dde-556d09c9a272" containerName="registry-server" probeResult="failure" output=< Jan 24 08:35:24 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Jan 24 08:35:24 crc kubenswrapper[4705]: > Jan 24 
08:35:25 crc kubenswrapper[4705]: I0124 08:35:25.576249 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:35:25 crc kubenswrapper[4705]: E0124 08:35:25.576859 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:35:33 crc kubenswrapper[4705]: I0124 08:35:33.987946 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:34 crc kubenswrapper[4705]: I0124 08:35:34.016716 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-szdtb" podStartSLOduration=16.503076401 podStartE2EDuration="21.016694186s" podCreationTimestamp="2026-01-24 08:35:13 +0000 UTC" firstStartedPulling="2026-01-24 08:35:15.634691883 +0000 UTC m=+3254.354565171" lastFinishedPulling="2026-01-24 08:35:20.148309668 +0000 UTC m=+3258.868182956" observedRunningTime="2026-01-24 08:35:20.712134099 +0000 UTC m=+3259.432007387" watchObservedRunningTime="2026-01-24 08:35:34.016694186 +0000 UTC m=+3272.736567474" Jan 24 08:35:34 crc kubenswrapper[4705]: I0124 08:35:34.045753 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:34 crc kubenswrapper[4705]: I0124 08:35:34.231179 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-szdtb"] Jan 24 08:35:35 crc kubenswrapper[4705]: I0124 08:35:35.819978 4705 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-szdtb" podUID="ba41ca23-3541-4685-9dde-556d09c9a272" containerName="registry-server" containerID="cri-o://7b86443d17ce16776890e3684980080d50ab924b6741340a130610f8f137f3ec" gracePeriod=2 Jan 24 08:35:36 crc kubenswrapper[4705]: I0124 08:35:36.830421 4705 generic.go:334] "Generic (PLEG): container finished" podID="ba41ca23-3541-4685-9dde-556d09c9a272" containerID="7b86443d17ce16776890e3684980080d50ab924b6741340a130610f8f137f3ec" exitCode=0 Jan 24 08:35:36 crc kubenswrapper[4705]: I0124 08:35:36.830476 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szdtb" event={"ID":"ba41ca23-3541-4685-9dde-556d09c9a272","Type":"ContainerDied","Data":"7b86443d17ce16776890e3684980080d50ab924b6741340a130610f8f137f3ec"} Jan 24 08:35:36 crc kubenswrapper[4705]: I0124 08:35:36.830835 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szdtb" event={"ID":"ba41ca23-3541-4685-9dde-556d09c9a272","Type":"ContainerDied","Data":"a0c7c7998bf7dff1720dd0292699cc40349586d3904230e6b7e71f6628c6c2de"} Jan 24 08:35:36 crc kubenswrapper[4705]: I0124 08:35:36.830882 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0c7c7998bf7dff1720dd0292699cc40349586d3904230e6b7e71f6628c6c2de" Jan 24 08:35:36 crc kubenswrapper[4705]: I0124 08:35:36.897130 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:37 crc kubenswrapper[4705]: I0124 08:35:37.071848 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba41ca23-3541-4685-9dde-556d09c9a272-catalog-content\") pod \"ba41ca23-3541-4685-9dde-556d09c9a272\" (UID: \"ba41ca23-3541-4685-9dde-556d09c9a272\") " Jan 24 08:35:37 crc kubenswrapper[4705]: I0124 08:35:37.071954 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr7ds\" (UniqueName: \"kubernetes.io/projected/ba41ca23-3541-4685-9dde-556d09c9a272-kube-api-access-wr7ds\") pod \"ba41ca23-3541-4685-9dde-556d09c9a272\" (UID: \"ba41ca23-3541-4685-9dde-556d09c9a272\") " Jan 24 08:35:37 crc kubenswrapper[4705]: I0124 08:35:37.071993 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba41ca23-3541-4685-9dde-556d09c9a272-utilities\") pod \"ba41ca23-3541-4685-9dde-556d09c9a272\" (UID: \"ba41ca23-3541-4685-9dde-556d09c9a272\") " Jan 24 08:35:37 crc kubenswrapper[4705]: I0124 08:35:37.072659 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba41ca23-3541-4685-9dde-556d09c9a272-utilities" (OuterVolumeSpecName: "utilities") pod "ba41ca23-3541-4685-9dde-556d09c9a272" (UID: "ba41ca23-3541-4685-9dde-556d09c9a272"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:35:37 crc kubenswrapper[4705]: I0124 08:35:37.079601 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba41ca23-3541-4685-9dde-556d09c9a272-kube-api-access-wr7ds" (OuterVolumeSpecName: "kube-api-access-wr7ds") pod "ba41ca23-3541-4685-9dde-556d09c9a272" (UID: "ba41ca23-3541-4685-9dde-556d09c9a272"). InnerVolumeSpecName "kube-api-access-wr7ds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:35:37 crc kubenswrapper[4705]: I0124 08:35:37.175618 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr7ds\" (UniqueName: \"kubernetes.io/projected/ba41ca23-3541-4685-9dde-556d09c9a272-kube-api-access-wr7ds\") on node \"crc\" DevicePath \"\"" Jan 24 08:35:37 crc kubenswrapper[4705]: I0124 08:35:37.175666 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba41ca23-3541-4685-9dde-556d09c9a272-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:35:37 crc kubenswrapper[4705]: I0124 08:35:37.189242 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba41ca23-3541-4685-9dde-556d09c9a272-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba41ca23-3541-4685-9dde-556d09c9a272" (UID: "ba41ca23-3541-4685-9dde-556d09c9a272"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:35:37 crc kubenswrapper[4705]: I0124 08:35:37.277608 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba41ca23-3541-4685-9dde-556d09c9a272-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:35:37 crc kubenswrapper[4705]: I0124 08:35:37.838747 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-szdtb" Jan 24 08:35:37 crc kubenswrapper[4705]: I0124 08:35:37.869999 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-szdtb"] Jan 24 08:35:37 crc kubenswrapper[4705]: I0124 08:35:37.879786 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-szdtb"] Jan 24 08:35:39 crc kubenswrapper[4705]: I0124 08:35:39.577001 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:35:39 crc kubenswrapper[4705]: E0124 08:35:39.577300 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:35:39 crc kubenswrapper[4705]: I0124 08:35:39.589590 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba41ca23-3541-4685-9dde-556d09c9a272" path="/var/lib/kubelet/pods/ba41ca23-3541-4685-9dde-556d09c9a272/volumes" Jan 24 08:35:52 crc kubenswrapper[4705]: I0124 08:35:52.575610 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:35:52 crc kubenswrapper[4705]: E0124 08:35:52.576425 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:36:04 crc kubenswrapper[4705]: I0124 08:36:04.577006 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:36:04 crc kubenswrapper[4705]: E0124 08:36:04.577890 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:36:19 crc kubenswrapper[4705]: I0124 08:36:19.576530 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:36:20 crc kubenswrapper[4705]: I0124 08:36:20.380228 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"3fa3be8e6420f4da812f5a765ecc9e11eb0505d6fe3b7bf40a938f2ed494500a"} Jan 24 08:37:02 crc kubenswrapper[4705]: I0124 08:37:02.498642 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7c64596589-v9zxl_f5382856-3a6e-4d10-beb2-9df688e2f6c7/manager/0.log" Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.520806 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz"] Jan 24 08:37:15 crc kubenswrapper[4705]: E0124 08:37:15.521743 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba41ca23-3541-4685-9dde-556d09c9a272" containerName="extract-utilities" Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.521757 4705 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="ba41ca23-3541-4685-9dde-556d09c9a272" containerName="extract-utilities" Jan 24 08:37:15 crc kubenswrapper[4705]: E0124 08:37:15.521797 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba41ca23-3541-4685-9dde-556d09c9a272" containerName="extract-content" Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.521803 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba41ca23-3541-4685-9dde-556d09c9a272" containerName="extract-content" Jan 24 08:37:15 crc kubenswrapper[4705]: E0124 08:37:15.521813 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba41ca23-3541-4685-9dde-556d09c9a272" containerName="registry-server" Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.521836 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba41ca23-3541-4685-9dde-556d09c9a272" containerName="registry-server" Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.525739 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba41ca23-3541-4685-9dde-556d09c9a272" containerName="registry-server" Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.527295 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.529931 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.545771 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz"] Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.610156 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d7c435c-74d0-498b-97ba-bdc522a1b144-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz\" (UID: \"8d7c435c-74d0-498b-97ba-bdc522a1b144\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.610210 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv7zb\" (UniqueName: \"kubernetes.io/projected/8d7c435c-74d0-498b-97ba-bdc522a1b144-kube-api-access-bv7zb\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz\" (UID: \"8d7c435c-74d0-498b-97ba-bdc522a1b144\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.610304 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d7c435c-74d0-498b-97ba-bdc522a1b144-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz\" (UID: \"8d7c435c-74d0-498b-97ba-bdc522a1b144\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" Jan 24 08:37:15 crc kubenswrapper[4705]: 
I0124 08:37:15.711789 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d7c435c-74d0-498b-97ba-bdc522a1b144-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz\" (UID: \"8d7c435c-74d0-498b-97ba-bdc522a1b144\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.711899 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv7zb\" (UniqueName: \"kubernetes.io/projected/8d7c435c-74d0-498b-97ba-bdc522a1b144-kube-api-access-bv7zb\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz\" (UID: \"8d7c435c-74d0-498b-97ba-bdc522a1b144\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.712000 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d7c435c-74d0-498b-97ba-bdc522a1b144-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz\" (UID: \"8d7c435c-74d0-498b-97ba-bdc522a1b144\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.712345 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d7c435c-74d0-498b-97ba-bdc522a1b144-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz\" (UID: \"8d7c435c-74d0-498b-97ba-bdc522a1b144\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.712439 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8d7c435c-74d0-498b-97ba-bdc522a1b144-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz\" (UID: \"8d7c435c-74d0-498b-97ba-bdc522a1b144\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.731934 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv7zb\" (UniqueName: \"kubernetes.io/projected/8d7c435c-74d0-498b-97ba-bdc522a1b144-kube-api-access-bv7zb\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz\" (UID: \"8d7c435c-74d0-498b-97ba-bdc522a1b144\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" Jan 24 08:37:15 crc kubenswrapper[4705]: I0124 08:37:15.851523 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" Jan 24 08:37:16 crc kubenswrapper[4705]: I0124 08:37:16.287297 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz"] Jan 24 08:37:16 crc kubenswrapper[4705]: W0124 08:37:16.290017 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d7c435c_74d0_498b_97ba_bdc522a1b144.slice/crio-d5edfbb33522925a4d79323e61bab9d98a118b3ee261105e9dc8a52e4483e7a0 WatchSource:0}: Error finding container d5edfbb33522925a4d79323e61bab9d98a118b3ee261105e9dc8a52e4483e7a0: Status 404 returned error can't find the container with id d5edfbb33522925a4d79323e61bab9d98a118b3ee261105e9dc8a52e4483e7a0 Jan 24 08:37:17 crc kubenswrapper[4705]: I0124 08:37:17.159330 4705 generic.go:334] "Generic (PLEG): container finished" podID="8d7c435c-74d0-498b-97ba-bdc522a1b144" containerID="109a7f6e5731ef0befe1d58cf4370e39963cd89e4a0a6098fca43657b9f5d77f" 
exitCode=0 Jan 24 08:37:17 crc kubenswrapper[4705]: I0124 08:37:17.159448 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" event={"ID":"8d7c435c-74d0-498b-97ba-bdc522a1b144","Type":"ContainerDied","Data":"109a7f6e5731ef0befe1d58cf4370e39963cd89e4a0a6098fca43657b9f5d77f"} Jan 24 08:37:17 crc kubenswrapper[4705]: I0124 08:37:17.159798 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" event={"ID":"8d7c435c-74d0-498b-97ba-bdc522a1b144","Type":"ContainerStarted","Data":"d5edfbb33522925a4d79323e61bab9d98a118b3ee261105e9dc8a52e4483e7a0"} Jan 24 08:37:19 crc kubenswrapper[4705]: I0124 08:37:19.179360 4705 generic.go:334] "Generic (PLEG): container finished" podID="8d7c435c-74d0-498b-97ba-bdc522a1b144" containerID="e11498b014902bf33ae1bce6853b8cd05f73f0703d86cb637e00d03a5789b1b3" exitCode=0 Jan 24 08:37:19 crc kubenswrapper[4705]: I0124 08:37:19.179401 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" event={"ID":"8d7c435c-74d0-498b-97ba-bdc522a1b144","Type":"ContainerDied","Data":"e11498b014902bf33ae1bce6853b8cd05f73f0703d86cb637e00d03a5789b1b3"} Jan 24 08:37:20 crc kubenswrapper[4705]: I0124 08:37:20.190902 4705 generic.go:334] "Generic (PLEG): container finished" podID="8d7c435c-74d0-498b-97ba-bdc522a1b144" containerID="d939eccafb5db5ebb88c0c7d490a88e8e4bbc778f974c435532024bab16860bf" exitCode=0 Jan 24 08:37:20 crc kubenswrapper[4705]: I0124 08:37:20.191020 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" event={"ID":"8d7c435c-74d0-498b-97ba-bdc522a1b144","Type":"ContainerDied","Data":"d939eccafb5db5ebb88c0c7d490a88e8e4bbc778f974c435532024bab16860bf"} Jan 24 08:37:21 crc 
kubenswrapper[4705]: I0124 08:37:21.536621 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" Jan 24 08:37:21 crc kubenswrapper[4705]: I0124 08:37:21.568915 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d7c435c-74d0-498b-97ba-bdc522a1b144-bundle\") pod \"8d7c435c-74d0-498b-97ba-bdc522a1b144\" (UID: \"8d7c435c-74d0-498b-97ba-bdc522a1b144\") " Jan 24 08:37:21 crc kubenswrapper[4705]: I0124 08:37:21.569174 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv7zb\" (UniqueName: \"kubernetes.io/projected/8d7c435c-74d0-498b-97ba-bdc522a1b144-kube-api-access-bv7zb\") pod \"8d7c435c-74d0-498b-97ba-bdc522a1b144\" (UID: \"8d7c435c-74d0-498b-97ba-bdc522a1b144\") " Jan 24 08:37:21 crc kubenswrapper[4705]: I0124 08:37:21.569381 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d7c435c-74d0-498b-97ba-bdc522a1b144-util\") pod \"8d7c435c-74d0-498b-97ba-bdc522a1b144\" (UID: \"8d7c435c-74d0-498b-97ba-bdc522a1b144\") " Jan 24 08:37:21 crc kubenswrapper[4705]: I0124 08:37:21.572134 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d7c435c-74d0-498b-97ba-bdc522a1b144-bundle" (OuterVolumeSpecName: "bundle") pod "8d7c435c-74d0-498b-97ba-bdc522a1b144" (UID: "8d7c435c-74d0-498b-97ba-bdc522a1b144"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:37:21 crc kubenswrapper[4705]: I0124 08:37:21.585346 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d7c435c-74d0-498b-97ba-bdc522a1b144-util" (OuterVolumeSpecName: "util") pod "8d7c435c-74d0-498b-97ba-bdc522a1b144" (UID: "8d7c435c-74d0-498b-97ba-bdc522a1b144"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:37:21 crc kubenswrapper[4705]: I0124 08:37:21.585683 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d7c435c-74d0-498b-97ba-bdc522a1b144-kube-api-access-bv7zb" (OuterVolumeSpecName: "kube-api-access-bv7zb") pod "8d7c435c-74d0-498b-97ba-bdc522a1b144" (UID: "8d7c435c-74d0-498b-97ba-bdc522a1b144"). InnerVolumeSpecName "kube-api-access-bv7zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:37:21 crc kubenswrapper[4705]: I0124 08:37:21.672002 4705 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d7c435c-74d0-498b-97ba-bdc522a1b144-util\") on node \"crc\" DevicePath \"\"" Jan 24 08:37:21 crc kubenswrapper[4705]: I0124 08:37:21.672301 4705 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d7c435c-74d0-498b-97ba-bdc522a1b144-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:37:21 crc kubenswrapper[4705]: I0124 08:37:21.672315 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv7zb\" (UniqueName: \"kubernetes.io/projected/8d7c435c-74d0-498b-97ba-bdc522a1b144-kube-api-access-bv7zb\") on node \"crc\" DevicePath \"\"" Jan 24 08:37:22 crc kubenswrapper[4705]: I0124 08:37:22.209303 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" 
event={"ID":"8d7c435c-74d0-498b-97ba-bdc522a1b144","Type":"ContainerDied","Data":"d5edfbb33522925a4d79323e61bab9d98a118b3ee261105e9dc8a52e4483e7a0"} Jan 24 08:37:22 crc kubenswrapper[4705]: I0124 08:37:22.209342 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5edfbb33522925a4d79323e61bab9d98a118b3ee261105e9dc8a52e4483e7a0" Jan 24 08:37:22 crc kubenswrapper[4705]: I0124 08:37:22.209407 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.379334 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-q6rz4"] Jan 24 08:37:33 crc kubenswrapper[4705]: E0124 08:37:33.380312 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7c435c-74d0-498b-97ba-bdc522a1b144" containerName="pull" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.380326 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7c435c-74d0-498b-97ba-bdc522a1b144" containerName="pull" Jan 24 08:37:33 crc kubenswrapper[4705]: E0124 08:37:33.380344 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7c435c-74d0-498b-97ba-bdc522a1b144" containerName="util" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.380349 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7c435c-74d0-498b-97ba-bdc522a1b144" containerName="util" Jan 24 08:37:33 crc kubenswrapper[4705]: E0124 08:37:33.380366 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7c435c-74d0-498b-97ba-bdc522a1b144" containerName="extract" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.380372 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7c435c-74d0-498b-97ba-bdc522a1b144" containerName="extract" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.380550 4705 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8d7c435c-74d0-498b-97ba-bdc522a1b144" containerName="extract" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.381228 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q6rz4" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.383103 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.383377 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-46wj4" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.386621 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.403987 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-q6rz4"] Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.430599 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfm69\" (UniqueName: \"kubernetes.io/projected/205fc2d4-b488-4221-b24e-c97e1447deb9-kube-api-access-pfm69\") pod \"obo-prometheus-operator-68bc856cb9-q6rz4\" (UID: \"205fc2d4-b488-4221-b24e-c97e1447deb9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q6rz4" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.478822 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf"] Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.484718 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.490407 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.490615 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-qh4hk" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.511347 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v"] Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.513141 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.531956 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v"] Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.533041 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfm69\" (UniqueName: \"kubernetes.io/projected/205fc2d4-b488-4221-b24e-c97e1447deb9-kube-api-access-pfm69\") pod \"obo-prometheus-operator-68bc856cb9-q6rz4\" (UID: \"205fc2d4-b488-4221-b24e-c97e1447deb9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q6rz4" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.552996 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf"] Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.574968 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfm69\" (UniqueName: 
\"kubernetes.io/projected/205fc2d4-b488-4221-b24e-c97e1447deb9-kube-api-access-pfm69\") pod \"obo-prometheus-operator-68bc856cb9-q6rz4\" (UID: \"205fc2d4-b488-4221-b24e-c97e1447deb9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q6rz4" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.635196 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3747c1cc-2cec-4baf-b6f2-14109753b841-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v\" (UID: \"3747c1cc-2cec-4baf-b6f2-14109753b841\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.635489 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b32cfae6-0b9f-4565-b802-c667cc6def0a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf\" (UID: \"b32cfae6-0b9f-4565-b802-c667cc6def0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.635656 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b32cfae6-0b9f-4565-b802-c667cc6def0a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf\" (UID: \"b32cfae6-0b9f-4565-b802-c667cc6def0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.636006 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3747c1cc-2cec-4baf-b6f2-14109753b841-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v\" (UID: \"3747c1cc-2cec-4baf-b6f2-14109753b841\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.676463 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-hjjgh"] Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.679881 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-hjjgh" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.689101 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-92xk6" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.689205 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.690732 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-hjjgh"] Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.714167 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q6rz4" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.739195 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b32cfae6-0b9f-4565-b802-c667cc6def0a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf\" (UID: \"b32cfae6-0b9f-4565-b802-c667cc6def0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.739661 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b32cfae6-0b9f-4565-b802-c667cc6def0a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf\" (UID: \"b32cfae6-0b9f-4565-b802-c667cc6def0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.739773 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3747c1cc-2cec-4baf-b6f2-14109753b841-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v\" (UID: \"3747c1cc-2cec-4baf-b6f2-14109753b841\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.740125 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3747c1cc-2cec-4baf-b6f2-14109753b841-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v\" (UID: \"3747c1cc-2cec-4baf-b6f2-14109753b841\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.745173 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3747c1cc-2cec-4baf-b6f2-14109753b841-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v\" (UID: \"3747c1cc-2cec-4baf-b6f2-14109753b841\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.746081 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b32cfae6-0b9f-4565-b802-c667cc6def0a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf\" (UID: \"b32cfae6-0b9f-4565-b802-c667cc6def0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.759485 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b32cfae6-0b9f-4565-b802-c667cc6def0a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf\" (UID: \"b32cfae6-0b9f-4565-b802-c667cc6def0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.759919 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3747c1cc-2cec-4baf-b6f2-14109753b841-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v\" (UID: \"3747c1cc-2cec-4baf-b6f2-14109753b841\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.805562 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.834432 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-l6h9z"] Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.835918 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-l6h9z" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.836545 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.840209 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-7kcvs" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.841386 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/348d157c-9094-4f31-aadf-44f7a46f561b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-hjjgh\" (UID: \"348d157c-9094-4f31-aadf-44f7a46f561b\") " pod="openshift-operators/observability-operator-59bdc8b94-hjjgh" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.841570 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw44m\" (UniqueName: \"kubernetes.io/projected/348d157c-9094-4f31-aadf-44f7a46f561b-kube-api-access-jw44m\") pod \"observability-operator-59bdc8b94-hjjgh\" (UID: \"348d157c-9094-4f31-aadf-44f7a46f561b\") " pod="openshift-operators/observability-operator-59bdc8b94-hjjgh" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.926519 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-l6h9z"] Jan 24 08:37:33 
crc kubenswrapper[4705]: I0124 08:37:33.943460 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3da07060-d23f-4ecc-9a3c-9d659a0ab121-openshift-service-ca\") pod \"perses-operator-5bf474d74f-l6h9z\" (UID: \"3da07060-d23f-4ecc-9a3c-9d659a0ab121\") " pod="openshift-operators/perses-operator-5bf474d74f-l6h9z" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.943539 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw44m\" (UniqueName: \"kubernetes.io/projected/348d157c-9094-4f31-aadf-44f7a46f561b-kube-api-access-jw44m\") pod \"observability-operator-59bdc8b94-hjjgh\" (UID: \"348d157c-9094-4f31-aadf-44f7a46f561b\") " pod="openshift-operators/observability-operator-59bdc8b94-hjjgh" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.943585 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb7dm\" (UniqueName: \"kubernetes.io/projected/3da07060-d23f-4ecc-9a3c-9d659a0ab121-kube-api-access-cb7dm\") pod \"perses-operator-5bf474d74f-l6h9z\" (UID: \"3da07060-d23f-4ecc-9a3c-9d659a0ab121\") " pod="openshift-operators/perses-operator-5bf474d74f-l6h9z" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.943664 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/348d157c-9094-4f31-aadf-44f7a46f561b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-hjjgh\" (UID: \"348d157c-9094-4f31-aadf-44f7a46f561b\") " pod="openshift-operators/observability-operator-59bdc8b94-hjjgh" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.955177 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/348d157c-9094-4f31-aadf-44f7a46f561b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-hjjgh\" (UID: \"348d157c-9094-4f31-aadf-44f7a46f561b\") " pod="openshift-operators/observability-operator-59bdc8b94-hjjgh" Jan 24 08:37:33 crc kubenswrapper[4705]: I0124 08:37:33.980674 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw44m\" (UniqueName: \"kubernetes.io/projected/348d157c-9094-4f31-aadf-44f7a46f561b-kube-api-access-jw44m\") pod \"observability-operator-59bdc8b94-hjjgh\" (UID: \"348d157c-9094-4f31-aadf-44f7a46f561b\") " pod="openshift-operators/observability-operator-59bdc8b94-hjjgh" Jan 24 08:37:34 crc kubenswrapper[4705]: I0124 08:37:34.046154 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3da07060-d23f-4ecc-9a3c-9d659a0ab121-openshift-service-ca\") pod \"perses-operator-5bf474d74f-l6h9z\" (UID: \"3da07060-d23f-4ecc-9a3c-9d659a0ab121\") " pod="openshift-operators/perses-operator-5bf474d74f-l6h9z" Jan 24 08:37:34 crc kubenswrapper[4705]: I0124 08:37:34.046240 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb7dm\" (UniqueName: \"kubernetes.io/projected/3da07060-d23f-4ecc-9a3c-9d659a0ab121-kube-api-access-cb7dm\") pod \"perses-operator-5bf474d74f-l6h9z\" (UID: \"3da07060-d23f-4ecc-9a3c-9d659a0ab121\") " pod="openshift-operators/perses-operator-5bf474d74f-l6h9z" Jan 24 08:37:34 crc kubenswrapper[4705]: I0124 08:37:34.047389 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3da07060-d23f-4ecc-9a3c-9d659a0ab121-openshift-service-ca\") pod \"perses-operator-5bf474d74f-l6h9z\" (UID: \"3da07060-d23f-4ecc-9a3c-9d659a0ab121\") " pod="openshift-operators/perses-operator-5bf474d74f-l6h9z" Jan 24 08:37:34 crc kubenswrapper[4705]: I0124 08:37:34.076799 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb7dm\" (UniqueName: \"kubernetes.io/projected/3da07060-d23f-4ecc-9a3c-9d659a0ab121-kube-api-access-cb7dm\") pod \"perses-operator-5bf474d74f-l6h9z\" (UID: \"3da07060-d23f-4ecc-9a3c-9d659a0ab121\") " pod="openshift-operators/perses-operator-5bf474d74f-l6h9z" Jan 24 08:37:34 crc kubenswrapper[4705]: I0124 08:37:34.172246 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-hjjgh" Jan 24 08:37:34 crc kubenswrapper[4705]: I0124 08:37:34.316216 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-l6h9z" Jan 24 08:37:34 crc kubenswrapper[4705]: W0124 08:37:34.657984 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3747c1cc_2cec_4baf_b6f2_14109753b841.slice/crio-5fec2396e67330e8b5a096abd5c5a9e8fdf19421ac87777a97a5ba8cc09eb6cb WatchSource:0}: Error finding container 5fec2396e67330e8b5a096abd5c5a9e8fdf19421ac87777a97a5ba8cc09eb6cb: Status 404 returned error can't find the container with id 5fec2396e67330e8b5a096abd5c5a9e8fdf19421ac87777a97a5ba8cc09eb6cb Jan 24 08:37:34 crc kubenswrapper[4705]: I0124 08:37:34.672554 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v"] Jan 24 08:37:34 crc kubenswrapper[4705]: I0124 08:37:34.754662 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf"] Jan 24 08:37:34 crc kubenswrapper[4705]: I0124 08:37:34.810697 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-q6rz4"] Jan 24 08:37:34 crc kubenswrapper[4705]: W0124 08:37:34.821675 4705 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod205fc2d4_b488_4221_b24e_c97e1447deb9.slice/crio-680e1e9d67b4e24d386c550d05501e1e8e57e86072a61173015b2396be1e94a9 WatchSource:0}: Error finding container 680e1e9d67b4e24d386c550d05501e1e8e57e86072a61173015b2396be1e94a9: Status 404 returned error can't find the container with id 680e1e9d67b4e24d386c550d05501e1e8e57e86072a61173015b2396be1e94a9 Jan 24 08:37:35 crc kubenswrapper[4705]: I0124 08:37:35.122655 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-hjjgh"] Jan 24 08:37:35 crc kubenswrapper[4705]: I0124 08:37:35.135665 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-l6h9z"] Jan 24 08:37:35 crc kubenswrapper[4705]: W0124 08:37:35.142954 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da07060_d23f_4ecc_9a3c_9d659a0ab121.slice/crio-7bf3bbd7fb2c1c35ec04ea70933c956c4398996d242e9b68aea6d0775c58ea77 WatchSource:0}: Error finding container 7bf3bbd7fb2c1c35ec04ea70933c956c4398996d242e9b68aea6d0775c58ea77: Status 404 returned error can't find the container with id 7bf3bbd7fb2c1c35ec04ea70933c956c4398996d242e9b68aea6d0775c58ea77 Jan 24 08:37:35 crc kubenswrapper[4705]: I0124 08:37:35.563838 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-l6h9z" event={"ID":"3da07060-d23f-4ecc-9a3c-9d659a0ab121","Type":"ContainerStarted","Data":"7bf3bbd7fb2c1c35ec04ea70933c956c4398996d242e9b68aea6d0775c58ea77"} Jan 24 08:37:35 crc kubenswrapper[4705]: I0124 08:37:35.565804 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q6rz4" event={"ID":"205fc2d4-b488-4221-b24e-c97e1447deb9","Type":"ContainerStarted","Data":"680e1e9d67b4e24d386c550d05501e1e8e57e86072a61173015b2396be1e94a9"} Jan 24 
08:37:35 crc kubenswrapper[4705]: I0124 08:37:35.567969 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-hjjgh" event={"ID":"348d157c-9094-4f31-aadf-44f7a46f561b","Type":"ContainerStarted","Data":"42f6d680aadbf7aff1cbc5776d4c23903b36bfb48b21a861af9eccfeba50e7d6"} Jan 24 08:37:35 crc kubenswrapper[4705]: I0124 08:37:35.569877 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v" event={"ID":"3747c1cc-2cec-4baf-b6f2-14109753b841","Type":"ContainerStarted","Data":"5fec2396e67330e8b5a096abd5c5a9e8fdf19421ac87777a97a5ba8cc09eb6cb"} Jan 24 08:37:35 crc kubenswrapper[4705]: I0124 08:37:35.571092 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf" event={"ID":"b32cfae6-0b9f-4565-b802-c667cc6def0a","Type":"ContainerStarted","Data":"9532d199f6b03c3429aabbfcda3c74f79d1f1ebe8582aa6b3efb3751461b8e72"} Jan 24 08:37:49 crc kubenswrapper[4705]: I0124 08:37:49.320255 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-l6h9z" event={"ID":"3da07060-d23f-4ecc-9a3c-9d659a0ab121","Type":"ContainerStarted","Data":"814178b29b61e5a871d62823c0729f9e56c6c529e3f50731f4cfca599c06310f"} Jan 24 08:37:49 crc kubenswrapper[4705]: I0124 08:37:49.320693 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-l6h9z" Jan 24 08:37:49 crc kubenswrapper[4705]: I0124 08:37:49.323349 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q6rz4" event={"ID":"205fc2d4-b488-4221-b24e-c97e1447deb9","Type":"ContainerStarted","Data":"8a7f762da3fcca6aa00d8187b3c41a74ff1a873a90e0b33117c81e18302676b3"} Jan 24 08:37:49 crc kubenswrapper[4705]: I0124 08:37:49.325022 4705 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-operators/observability-operator-59bdc8b94-hjjgh" event={"ID":"348d157c-9094-4f31-aadf-44f7a46f561b","Type":"ContainerStarted","Data":"b0065952d68481d075904e01393aae1ecc9495244be8bc9aab772203b77b0d1c"} Jan 24 08:37:49 crc kubenswrapper[4705]: I0124 08:37:49.325193 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-hjjgh" Jan 24 08:37:49 crc kubenswrapper[4705]: I0124 08:37:49.327133 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v" event={"ID":"3747c1cc-2cec-4baf-b6f2-14109753b841","Type":"ContainerStarted","Data":"ff255f76fffd9ae26d7813ec1b6293899c0886b3745adf42ab7896508b9267a8"} Jan 24 08:37:49 crc kubenswrapper[4705]: I0124 08:37:49.328908 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf" event={"ID":"b32cfae6-0b9f-4565-b802-c667cc6def0a","Type":"ContainerStarted","Data":"6f48f8bc17c49b8a6045df7cf6168d3ae4c0fa2dca38fc0f004fe030b02b4e38"} Jan 24 08:37:49 crc kubenswrapper[4705]: I0124 08:37:49.342249 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-hjjgh" Jan 24 08:37:49 crc kubenswrapper[4705]: I0124 08:37:49.342340 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-l6h9z" podStartSLOduration=5.300902736 podStartE2EDuration="16.342302108s" podCreationTimestamp="2026-01-24 08:37:33 +0000 UTC" firstStartedPulling="2026-01-24 08:37:35.145734266 +0000 UTC m=+3393.865607554" lastFinishedPulling="2026-01-24 08:37:46.187133628 +0000 UTC m=+3404.907006926" observedRunningTime="2026-01-24 08:37:49.334486476 +0000 UTC m=+3408.054359764" watchObservedRunningTime="2026-01-24 08:37:49.342302108 +0000 UTC m=+3408.062175396" Jan 24 08:37:49 
crc kubenswrapper[4705]: I0124 08:37:49.361628 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf" podStartSLOduration=4.920587891 podStartE2EDuration="16.361609997s" podCreationTimestamp="2026-01-24 08:37:33 +0000 UTC" firstStartedPulling="2026-01-24 08:37:34.744992671 +0000 UTC m=+3393.464865959" lastFinishedPulling="2026-01-24 08:37:46.186014776 +0000 UTC m=+3404.905888065" observedRunningTime="2026-01-24 08:37:49.355676679 +0000 UTC m=+3408.075549977" watchObservedRunningTime="2026-01-24 08:37:49.361609997 +0000 UTC m=+3408.081483285" Jan 24 08:37:49 crc kubenswrapper[4705]: I0124 08:37:49.476017 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v" podStartSLOduration=4.970372288 podStartE2EDuration="16.47597503s" podCreationTimestamp="2026-01-24 08:37:33 +0000 UTC" firstStartedPulling="2026-01-24 08:37:34.680330482 +0000 UTC m=+3393.400203760" lastFinishedPulling="2026-01-24 08:37:46.185933224 +0000 UTC m=+3404.905806502" observedRunningTime="2026-01-24 08:37:49.382029968 +0000 UTC m=+3408.101903256" watchObservedRunningTime="2026-01-24 08:37:49.47597503 +0000 UTC m=+3408.195848318" Jan 24 08:37:49 crc kubenswrapper[4705]: I0124 08:37:49.515691 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-hjjgh" podStartSLOduration=5.322291085 podStartE2EDuration="16.515671248s" podCreationTimestamp="2026-01-24 08:37:33 +0000 UTC" firstStartedPulling="2026-01-24 08:37:35.133071406 +0000 UTC m=+3393.852944694" lastFinishedPulling="2026-01-24 08:37:46.326451569 +0000 UTC m=+3405.046324857" observedRunningTime="2026-01-24 08:37:49.510223153 +0000 UTC m=+3408.230096441" watchObservedRunningTime="2026-01-24 08:37:49.515671248 +0000 UTC m=+3408.235544536" Jan 24 08:37:49 crc 
kubenswrapper[4705]: I0124 08:37:49.551234 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q6rz4" podStartSLOduration=5.199711499 podStartE2EDuration="16.551213249s" podCreationTimestamp="2026-01-24 08:37:33 +0000 UTC" firstStartedPulling="2026-01-24 08:37:34.835178005 +0000 UTC m=+3393.555051283" lastFinishedPulling="2026-01-24 08:37:46.186679745 +0000 UTC m=+3404.906553033" observedRunningTime="2026-01-24 08:37:49.527246638 +0000 UTC m=+3408.247119926" watchObservedRunningTime="2026-01-24 08:37:49.551213249 +0000 UTC m=+3408.271086537" Jan 24 08:37:54 crc kubenswrapper[4705]: I0124 08:37:54.318753 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-l6h9z" Jan 24 08:38:01 crc kubenswrapper[4705]: I0124 08:38:01.012187 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 24 08:38:01 crc kubenswrapper[4705]: I0124 08:38:01.013015 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerName="aodh-api" containerID="cri-o://4733392d0e8b6b83304edce8c62741e3c2cd02c2dcbc80a50e83321e50931efa" gracePeriod=30 Jan 24 08:38:01 crc kubenswrapper[4705]: I0124 08:38:01.013095 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerName="aodh-listener" containerID="cri-o://509d159820b319cd0ad5cd7133dfbaf601cdd5aa0e98ccb0686009c9cc60f3c9" gracePeriod=30 Jan 24 08:38:01 crc kubenswrapper[4705]: I0124 08:38:01.013111 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerName="aodh-notifier" containerID="cri-o://19c74d544773422095c338a3830bf3840932aa8197e513d405799042c562fd03" gracePeriod=30 Jan 24 08:38:01 crc 
kubenswrapper[4705]: I0124 08:38:01.013607 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerName="aodh-evaluator" containerID="cri-o://aabba643082d6b2153cc8d470336f8167a3344652db41623a774d7f6cd9cdf26" gracePeriod=30 Jan 24 08:38:01 crc kubenswrapper[4705]: I0124 08:38:01.596786 4705 generic.go:334] "Generic (PLEG): container finished" podID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerID="4733392d0e8b6b83304edce8c62741e3c2cd02c2dcbc80a50e83321e50931efa" exitCode=0 Jan 24 08:38:01 crc kubenswrapper[4705]: I0124 08:38:01.597647 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a5b63e41-f8bd-4ac0-b23e-3716b5098194","Type":"ContainerDied","Data":"4733392d0e8b6b83304edce8c62741e3c2cd02c2dcbc80a50e83321e50931efa"} Jan 24 08:38:01 crc kubenswrapper[4705]: I0124 08:38:01.993923 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 24 08:38:01 crc kubenswrapper[4705]: I0124 08:38:01.996621 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:01 crc kubenswrapper[4705]: I0124 08:38:01.998986 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Jan 24 08:38:01 crc kubenswrapper[4705]: I0124 08:38:01.999434 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Jan 24 08:38:01 crc kubenswrapper[4705]: I0124 08:38:01.999742 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.000043 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.000194 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-2cvsr" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.022910 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.117669 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.120184 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.122573 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.122671 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.122574 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.122731 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.122771 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.122839 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.124460 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-c9h62" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.127411 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wmdt\" (UniqueName: \"kubernetes.io/projected/2acfda0f-e35f-4215-8f97-dbb885b75b34-kube-api-access-7wmdt\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.127536 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: 
\"kubernetes.io/secret/2acfda0f-e35f-4215-8f97-dbb885b75b34-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.127571 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/2acfda0f-e35f-4215-8f97-dbb885b75b34-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.127600 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2acfda0f-e35f-4215-8f97-dbb885b75b34-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.127624 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2acfda0f-e35f-4215-8f97-dbb885b75b34-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.127694 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2acfda0f-e35f-4215-8f97-dbb885b75b34-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.127729 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2acfda0f-e35f-4215-8f97-dbb885b75b34-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.142515 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.147279 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.229397 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/896f4b8f-b641-48e3-96c6-628688a52534-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.229792 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2acfda0f-e35f-4215-8f97-dbb885b75b34-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.229846 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.229888 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/2acfda0f-e35f-4215-8f97-dbb885b75b34-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.229910 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/896f4b8f-b641-48e3-96c6-628688a52534-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.229944 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.230008 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-config\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.230045 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snv94\" (UniqueName: \"kubernetes.io/projected/896f4b8f-b641-48e3-96c6-628688a52534-kube-api-access-snv94\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.230074 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-7wmdt\" (UniqueName: \"kubernetes.io/projected/2acfda0f-e35f-4215-8f97-dbb885b75b34-kube-api-access-7wmdt\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.230110 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.230143 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.230227 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/2acfda0f-e35f-4215-8f97-dbb885b75b34-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.230263 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/2acfda0f-e35f-4215-8f97-dbb885b75b34-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.230293 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2acfda0f-e35f-4215-8f97-dbb885b75b34-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.230335 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2acfda0f-e35f-4215-8f97-dbb885b75b34-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.230363 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.230388 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.231559 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/2acfda0f-e35f-4215-8f97-dbb885b75b34-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " 
pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.237013 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/2acfda0f-e35f-4215-8f97-dbb885b75b34-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.237235 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2acfda0f-e35f-4215-8f97-dbb885b75b34-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.239114 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2acfda0f-e35f-4215-8f97-dbb885b75b34-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.249875 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2acfda0f-e35f-4215-8f97-dbb885b75b34-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.251578 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wmdt\" (UniqueName: \"kubernetes.io/projected/2acfda0f-e35f-4215-8f97-dbb885b75b34-kube-api-access-7wmdt\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: 
I0124 08:38:02.260363 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2acfda0f-e35f-4215-8f97-dbb885b75b34-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"2acfda0f-e35f-4215-8f97-dbb885b75b34\") " pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.318299 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.332641 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.332686 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.332715 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/896f4b8f-b641-48e3-96c6-628688a52534-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.332761 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.332789 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/896f4b8f-b641-48e3-96c6-628688a52534-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.332815 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.332855 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-config\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.332880 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snv94\" (UniqueName: \"kubernetes.io/projected/896f4b8f-b641-48e3-96c6-628688a52534-kube-api-access-snv94\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.332906 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.332933 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.333368 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.333714 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.333980 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.334408 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.337167 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.337605 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/896f4b8f-b641-48e3-96c6-628688a52534-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.337642 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-config\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.342524 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/896f4b8f-b641-48e3-96c6-628688a52534-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.344291 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.356909 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snv94\" (UniqueName: \"kubernetes.io/projected/896f4b8f-b641-48e3-96c6-628688a52534-kube-api-access-snv94\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.401200 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"prometheus-metric-storage-0\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.447362 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.664461 4705 generic.go:334] "Generic (PLEG): container finished" podID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerID="aabba643082d6b2153cc8d470336f8167a3344652db41623a774d7f6cd9cdf26" exitCode=0 Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.664802 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a5b63e41-f8bd-4ac0-b23e-3716b5098194","Type":"ContainerDied","Data":"aabba643082d6b2153cc8d470336f8167a3344652db41623a774d7f6cd9cdf26"} Jan 24 08:38:02 crc kubenswrapper[4705]: W0124 08:38:02.928758 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2acfda0f_e35f_4215_8f97_dbb885b75b34.slice/crio-87299f19ed9bebdddf1a977219724278053d232358f07d4f0f8fa7c0ec152e69 WatchSource:0}: Error finding container 87299f19ed9bebdddf1a977219724278053d232358f07d4f0f8fa7c0ec152e69: Status 404 returned error can't find the container with id 87299f19ed9bebdddf1a977219724278053d232358f07d4f0f8fa7c0ec152e69 Jan 24 08:38:02 crc kubenswrapper[4705]: I0124 08:38:02.931299 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 24 08:38:03 crc kubenswrapper[4705]: I0124 08:38:03.101246 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:38:03 crc kubenswrapper[4705]: W0124 08:38:03.120581 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod896f4b8f_b641_48e3_96c6_628688a52534.slice/crio-bede8cd169f0ceda992bc101cf8be52a450773dae64afbe804441c56353b3834 WatchSource:0}: Error finding container bede8cd169f0ceda992bc101cf8be52a450773dae64afbe804441c56353b3834: Status 404 returned error can't find the container with id 
bede8cd169f0ceda992bc101cf8be52a450773dae64afbe804441c56353b3834 Jan 24 08:38:03 crc kubenswrapper[4705]: I0124 08:38:03.675247 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"896f4b8f-b641-48e3-96c6-628688a52534","Type":"ContainerStarted","Data":"bede8cd169f0ceda992bc101cf8be52a450773dae64afbe804441c56353b3834"} Jan 24 08:38:03 crc kubenswrapper[4705]: I0124 08:38:03.677235 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"2acfda0f-e35f-4215-8f97-dbb885b75b34","Type":"ContainerStarted","Data":"87299f19ed9bebdddf1a977219724278053d232358f07d4f0f8fa7c0ec152e69"} Jan 24 08:38:03 crc kubenswrapper[4705]: I0124 08:38:03.680393 4705 generic.go:334] "Generic (PLEG): container finished" podID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerID="19c74d544773422095c338a3830bf3840932aa8197e513d405799042c562fd03" exitCode=0 Jan 24 08:38:03 crc kubenswrapper[4705]: I0124 08:38:03.680453 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a5b63e41-f8bd-4ac0-b23e-3716b5098194","Type":"ContainerDied","Data":"19c74d544773422095c338a3830bf3840932aa8197e513d405799042c562fd03"} Jan 24 08:38:04 crc kubenswrapper[4705]: I0124 08:38:04.718054 4705 generic.go:334] "Generic (PLEG): container finished" podID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerID="509d159820b319cd0ad5cd7133dfbaf601cdd5aa0e98ccb0686009c9cc60f3c9" exitCode=0 Jan 24 08:38:04 crc kubenswrapper[4705]: I0124 08:38:04.718470 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a5b63e41-f8bd-4ac0-b23e-3716b5098194","Type":"ContainerDied","Data":"509d159820b319cd0ad5cd7133dfbaf601cdd5aa0e98ccb0686009c9cc60f3c9"} Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.140123 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.303084 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-public-tls-certs\") pod \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.303506 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvsc5\" (UniqueName: \"kubernetes.io/projected/a5b63e41-f8bd-4ac0-b23e-3716b5098194-kube-api-access-dvsc5\") pod \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.303706 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-config-data\") pod \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.303745 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-scripts\") pod \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.303793 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-combined-ca-bundle\") pod \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.303857 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-internal-tls-certs\") pod \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\" (UID: \"a5b63e41-f8bd-4ac0-b23e-3716b5098194\") " Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.310982 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5b63e41-f8bd-4ac0-b23e-3716b5098194-kube-api-access-dvsc5" (OuterVolumeSpecName: "kube-api-access-dvsc5") pod "a5b63e41-f8bd-4ac0-b23e-3716b5098194" (UID: "a5b63e41-f8bd-4ac0-b23e-3716b5098194"). InnerVolumeSpecName "kube-api-access-dvsc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.312444 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-scripts" (OuterVolumeSpecName: "scripts") pod "a5b63e41-f8bd-4ac0-b23e-3716b5098194" (UID: "a5b63e41-f8bd-4ac0-b23e-3716b5098194"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.388339 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a5b63e41-f8bd-4ac0-b23e-3716b5098194" (UID: "a5b63e41-f8bd-4ac0-b23e-3716b5098194"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.391336 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a5b63e41-f8bd-4ac0-b23e-3716b5098194" (UID: "a5b63e41-f8bd-4ac0-b23e-3716b5098194"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.406919 4705 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.406958 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvsc5\" (UniqueName: \"kubernetes.io/projected/a5b63e41-f8bd-4ac0-b23e-3716b5098194-kube-api-access-dvsc5\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.406975 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.406987 4705 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.497073 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-config-data" (OuterVolumeSpecName: "config-data") pod "a5b63e41-f8bd-4ac0-b23e-3716b5098194" (UID: "a5b63e41-f8bd-4ac0-b23e-3716b5098194"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.506981 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5b63e41-f8bd-4ac0-b23e-3716b5098194" (UID: "a5b63e41-f8bd-4ac0-b23e-3716b5098194"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.510053 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.510102 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b63e41-f8bd-4ac0-b23e-3716b5098194-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.729184 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a5b63e41-f8bd-4ac0-b23e-3716b5098194","Type":"ContainerDied","Data":"df5a0218be6960182ae34c3f93369ad42da048df7d1f3834996a1bc668f46852"} Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.729247 4705 scope.go:117] "RemoveContainer" containerID="509d159820b319cd0ad5cd7133dfbaf601cdd5aa0e98ccb0686009c9cc60f3c9" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.730014 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.755129 4705 scope.go:117] "RemoveContainer" containerID="19c74d544773422095c338a3830bf3840932aa8197e513d405799042c562fd03" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.762011 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.783345 4705 scope.go:117] "RemoveContainer" containerID="aabba643082d6b2153cc8d470336f8167a3344652db41623a774d7f6cd9cdf26" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.785277 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.793961 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 24 08:38:05 crc kubenswrapper[4705]: E0124 08:38:05.794579 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerName="aodh-listener" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.794601 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerName="aodh-listener" Jan 24 08:38:05 crc kubenswrapper[4705]: E0124 08:38:05.794623 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerName="aodh-notifier" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.794629 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerName="aodh-notifier" Jan 24 08:38:05 crc kubenswrapper[4705]: E0124 08:38:05.794650 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerName="aodh-evaluator" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.794656 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" 
containerName="aodh-evaluator" Jan 24 08:38:05 crc kubenswrapper[4705]: E0124 08:38:05.794669 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerName="aodh-api" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.794675 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerName="aodh-api" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.794907 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerName="aodh-listener" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.794928 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerName="aodh-api" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.794940 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerName="aodh-notifier" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.794950 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" containerName="aodh-evaluator" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.796945 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.800327 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.800583 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.800740 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.800870 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-q9ztl" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.801650 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.802737 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.809100 4705 scope.go:117] "RemoveContainer" containerID="4733392d0e8b6b83304edce8c62741e3c2cd02c2dcbc80a50e83321e50931efa" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.919487 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-internal-tls-certs\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.919561 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 
08:38:05.919620 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-scripts\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.919676 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-config-data\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.919996 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw6pt\" (UniqueName: \"kubernetes.io/projected/8b4450b3-41c6-4da2-8abc-832f47666516-kube-api-access-dw6pt\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:05 crc kubenswrapper[4705]: I0124 08:38:05.920261 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-public-tls-certs\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:06 crc kubenswrapper[4705]: I0124 08:38:06.022157 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw6pt\" (UniqueName: \"kubernetes.io/projected/8b4450b3-41c6-4da2-8abc-832f47666516-kube-api-access-dw6pt\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:06 crc kubenswrapper[4705]: I0124 08:38:06.022287 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-public-tls-certs\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:06 crc kubenswrapper[4705]: I0124 08:38:06.022376 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-internal-tls-certs\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:06 crc kubenswrapper[4705]: I0124 08:38:06.022432 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:06 crc kubenswrapper[4705]: I0124 08:38:06.022467 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-scripts\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:06 crc kubenswrapper[4705]: I0124 08:38:06.022494 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-config-data\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:06 crc kubenswrapper[4705]: I0124 08:38:06.027300 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-internal-tls-certs\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:06 crc kubenswrapper[4705]: I0124 08:38:06.027778 4705 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-scripts\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:06 crc kubenswrapper[4705]: I0124 08:38:06.028107 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:06 crc kubenswrapper[4705]: I0124 08:38:06.034401 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-public-tls-certs\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:06 crc kubenswrapper[4705]: I0124 08:38:06.039760 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-config-data\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:06 crc kubenswrapper[4705]: I0124 08:38:06.045241 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw6pt\" (UniqueName: \"kubernetes.io/projected/8b4450b3-41c6-4da2-8abc-832f47666516-kube-api-access-dw6pt\") pod \"aodh-0\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " pod="openstack/aodh-0" Jan 24 08:38:06 crc kubenswrapper[4705]: I0124 08:38:06.127059 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 24 08:38:06 crc kubenswrapper[4705]: I0124 08:38:06.690601 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 24 08:38:06 crc kubenswrapper[4705]: I0124 08:38:06.743261 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8b4450b3-41c6-4da2-8abc-832f47666516","Type":"ContainerStarted","Data":"8ae56bd1c2ddf660a64ef1656448fc48781be90f9bac74f2bf45e74d6aba2ebc"} Jan 24 08:38:07 crc kubenswrapper[4705]: I0124 08:38:07.758842 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5b63e41-f8bd-4ac0-b23e-3716b5098194" path="/var/lib/kubelet/pods/a5b63e41-f8bd-4ac0-b23e-3716b5098194/volumes" Jan 24 08:38:08 crc kubenswrapper[4705]: I0124 08:38:08.771971 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"2acfda0f-e35f-4215-8f97-dbb885b75b34","Type":"ContainerStarted","Data":"2facefa2a1fbe5f44d6ae47790beefee128ab5d77b882c759a5457e9452f6270"} Jan 24 08:38:08 crc kubenswrapper[4705]: I0124 08:38:08.773727 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8b4450b3-41c6-4da2-8abc-832f47666516","Type":"ContainerStarted","Data":"a53200a50a7736aa011ce96c811f6f82c6b1d46f6712a6edbc485c2952a4da22"} Jan 24 08:38:09 crc kubenswrapper[4705]: I0124 08:38:09.801452 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"896f4b8f-b641-48e3-96c6-628688a52534","Type":"ContainerStarted","Data":"df51a3aceaa9076b238cb6daf455db3486d0ddabbad0e016e87184f4417c745f"} Jan 24 08:38:09 crc kubenswrapper[4705]: I0124 08:38:09.828120 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8b4450b3-41c6-4da2-8abc-832f47666516","Type":"ContainerStarted","Data":"2ab5dbed6d74d017b85212b627aaa5ef9d5748256e652e38979a4f004bb6407d"} Jan 24 08:38:10 crc kubenswrapper[4705]: I0124 
08:38:10.843303 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8b4450b3-41c6-4da2-8abc-832f47666516","Type":"ContainerStarted","Data":"f328dcbf47581001f5f33ec280ff071aec244c0ed0f98c4b805cb8f85db1588f"} Jan 24 08:38:11 crc kubenswrapper[4705]: I0124 08:38:11.856784 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8b4450b3-41c6-4da2-8abc-832f47666516","Type":"ContainerStarted","Data":"8956a89a11cb15cd366f1061e23eccfc9b2a4126db227cc9f8a46d0d87256add"} Jan 24 08:38:11 crc kubenswrapper[4705]: I0124 08:38:11.894667 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.745611982 podStartE2EDuration="6.894644864s" podCreationTimestamp="2026-01-24 08:38:05 +0000 UTC" firstStartedPulling="2026-01-24 08:38:06.692705903 +0000 UTC m=+3425.412579191" lastFinishedPulling="2026-01-24 08:38:10.841738745 +0000 UTC m=+3429.561612073" observedRunningTime="2026-01-24 08:38:11.888572042 +0000 UTC m=+3430.608445400" watchObservedRunningTime="2026-01-24 08:38:11.894644864 +0000 UTC m=+3430.614518162" Jan 24 08:38:14 crc kubenswrapper[4705]: I0124 08:38:14.895107 4705 generic.go:334] "Generic (PLEG): container finished" podID="2acfda0f-e35f-4215-8f97-dbb885b75b34" containerID="2facefa2a1fbe5f44d6ae47790beefee128ab5d77b882c759a5457e9452f6270" exitCode=0 Jan 24 08:38:14 crc kubenswrapper[4705]: I0124 08:38:14.895699 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"2acfda0f-e35f-4215-8f97-dbb885b75b34","Type":"ContainerDied","Data":"2facefa2a1fbe5f44d6ae47790beefee128ab5d77b882c759a5457e9452f6270"} Jan 24 08:38:15 crc kubenswrapper[4705]: I0124 08:38:15.909414 4705 generic.go:334] "Generic (PLEG): container finished" podID="896f4b8f-b641-48e3-96c6-628688a52534" containerID="df51a3aceaa9076b238cb6daf455db3486d0ddabbad0e016e87184f4417c745f" exitCode=0 Jan 24 08:38:15 crc 
kubenswrapper[4705]: I0124 08:38:15.909455 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"896f4b8f-b641-48e3-96c6-628688a52534","Type":"ContainerDied","Data":"df51a3aceaa9076b238cb6daf455db3486d0ddabbad0e016e87184f4417c745f"} Jan 24 08:38:17 crc kubenswrapper[4705]: I0124 08:38:17.929330 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"2acfda0f-e35f-4215-8f97-dbb885b75b34","Type":"ContainerStarted","Data":"f4754f042187c30555a08edc2f5969daa3bdbd35b85e56e33063221ce9237b4c"} Jan 24 08:38:23 crc kubenswrapper[4705]: I0124 08:38:23.993614 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"896f4b8f-b641-48e3-96c6-628688a52534","Type":"ContainerStarted","Data":"ccc0d47caf1c78b6a9469fc0e92477febd566c11253fe2803a58f78594dadfca"} Jan 24 08:38:23 crc kubenswrapper[4705]: I0124 08:38:23.997674 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"2acfda0f-e35f-4215-8f97-dbb885b75b34","Type":"ContainerStarted","Data":"da0acf6bfc6dca1e164a899ccc52faf42f4f51bf57b1ee6bb72a43b1845e07df"} Jan 24 08:38:23 crc kubenswrapper[4705]: I0124 08:38:23.998262 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:24 crc kubenswrapper[4705]: I0124 08:38:24.001552 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Jan 24 08:38:24 crc kubenswrapper[4705]: I0124 08:38:24.023800 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=8.80803196 podStartE2EDuration="23.023781598s" podCreationTimestamp="2026-01-24 08:38:01 +0000 UTC" firstStartedPulling="2026-01-24 08:38:02.931723886 +0000 UTC m=+3421.651597174" 
lastFinishedPulling="2026-01-24 08:38:17.147473534 +0000 UTC m=+3435.867346812" observedRunningTime="2026-01-24 08:38:24.020149955 +0000 UTC m=+3442.740023283" watchObservedRunningTime="2026-01-24 08:38:24.023781598 +0000 UTC m=+3442.743654886" Jan 24 08:38:27 crc kubenswrapper[4705]: I0124 08:38:27.057558 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"896f4b8f-b641-48e3-96c6-628688a52534","Type":"ContainerStarted","Data":"314ebc5cecc031e782621d4c950035ef1fe917e0975be19122317af6ddedc425"} Jan 24 08:38:32 crc kubenswrapper[4705]: I0124 08:38:32.166019 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"896f4b8f-b641-48e3-96c6-628688a52534","Type":"ContainerStarted","Data":"5b484558054f04110e48babde9e0094006cf75f09b35732be150ad79d5854b43"} Jan 24 08:38:32 crc kubenswrapper[4705]: I0124 08:38:32.206804 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.11098957 podStartE2EDuration="31.20678566s" podCreationTimestamp="2026-01-24 08:38:01 +0000 UTC" firstStartedPulling="2026-01-24 08:38:03.122872662 +0000 UTC m=+3421.842745950" lastFinishedPulling="2026-01-24 08:38:31.218668752 +0000 UTC m=+3449.938542040" observedRunningTime="2026-01-24 08:38:32.201646114 +0000 UTC m=+3450.921519412" watchObservedRunningTime="2026-01-24 08:38:32.20678566 +0000 UTC m=+3450.926658948" Jan 24 08:38:32 crc kubenswrapper[4705]: I0124 08:38:32.448640 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:32 crc kubenswrapper[4705]: I0124 08:38:32.448709 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:32 crc kubenswrapper[4705]: I0124 08:38:32.450888 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:33 crc kubenswrapper[4705]: I0124 08:38:33.177542 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.072535 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.073080 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="fed21d33-a27e-43a4-b5aa-7d3c25375467" containerName="openstackclient" containerID="cri-o://19b1e4f3352b3c97e8b170f878cdffd963cef92743f71a3b49da9139d2ac4da0" gracePeriod=2 Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.084737 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.118249 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 24 08:38:35 crc kubenswrapper[4705]: E0124 08:38:35.118848 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fed21d33-a27e-43a4-b5aa-7d3c25375467" containerName="openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.118872 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fed21d33-a27e-43a4-b5aa-7d3c25375467" containerName="openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.119098 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="fed21d33-a27e-43a4-b5aa-7d3c25375467" containerName="openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.120366 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.138517 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="fed21d33-a27e-43a4-b5aa-7d3c25375467" podUID="d146f337-883b-4294-851d-d87f08803a98" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.143291 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.160202 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d146f337-883b-4294-851d-d87f08803a98-openstack-config\") pod \"openstackclient\" (UID: \"d146f337-883b-4294-851d-d87f08803a98\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.160272 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d146f337-883b-4294-851d-d87f08803a98-openstack-config-secret\") pod \"openstackclient\" (UID: \"d146f337-883b-4294-851d-d87f08803a98\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.160308 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d146f337-883b-4294-851d-d87f08803a98-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d146f337-883b-4294-851d-d87f08803a98\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.160324 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwq5f\" (UniqueName: \"kubernetes.io/projected/d146f337-883b-4294-851d-d87f08803a98-kube-api-access-dwq5f\") pod \"openstackclient\" (UID: 
\"d146f337-883b-4294-851d-d87f08803a98\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.177513 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 24 08:38:35 crc kubenswrapper[4705]: E0124 08:38:35.178611 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-dwq5f openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="d146f337-883b-4294-851d-d87f08803a98" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.198714 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.198964 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.203467 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="d146f337-883b-4294-851d-d87f08803a98" podUID="5bf2f8d1-1a23-4328-9169-1dea01964d94" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.221595 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.224838 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.228253 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.251472 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.262090 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d146f337-883b-4294-851d-d87f08803a98-openstack-config\") pod \"openstackclient\" (UID: \"d146f337-883b-4294-851d-d87f08803a98\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.262154 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d146f337-883b-4294-851d-d87f08803a98-openstack-config-secret\") pod \"openstackclient\" (UID: \"d146f337-883b-4294-851d-d87f08803a98\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.262182 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwq5f\" (UniqueName: \"kubernetes.io/projected/d146f337-883b-4294-851d-d87f08803a98-kube-api-access-dwq5f\") pod \"openstackclient\" (UID: \"d146f337-883b-4294-851d-d87f08803a98\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.262199 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d146f337-883b-4294-851d-d87f08803a98-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d146f337-883b-4294-851d-d87f08803a98\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.262949 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d146f337-883b-4294-851d-d87f08803a98-openstack-config\") pod \"openstackclient\" (UID: 
\"d146f337-883b-4294-851d-d87f08803a98\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: E0124 08:38:35.263855 4705 projected.go:194] Error preparing data for projected volume kube-api-access-dwq5f for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (d146f337-883b-4294-851d-d87f08803a98) does not match the UID in record. The object might have been deleted and then recreated Jan 24 08:38:35 crc kubenswrapper[4705]: E0124 08:38:35.263946 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d146f337-883b-4294-851d-d87f08803a98-kube-api-access-dwq5f podName:d146f337-883b-4294-851d-d87f08803a98 nodeName:}" failed. No retries permitted until 2026-01-24 08:38:35.763917182 +0000 UTC m=+3454.483790550 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dwq5f" (UniqueName: "kubernetes.io/projected/d146f337-883b-4294-851d-d87f08803a98-kube-api-access-dwq5f") pod "openstackclient" (UID: "d146f337-883b-4294-851d-d87f08803a98") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (d146f337-883b-4294-851d-d87f08803a98) does not match the UID in record. 
The object might have been deleted and then recreated Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.270787 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d146f337-883b-4294-851d-d87f08803a98-openstack-config-secret\") pod \"openstackclient\" (UID: \"d146f337-883b-4294-851d-d87f08803a98\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.273442 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d146f337-883b-4294-851d-d87f08803a98-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d146f337-883b-4294-851d-d87f08803a98\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.363405 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d146f337-883b-4294-851d-d87f08803a98-openstack-config\") pod \"d146f337-883b-4294-851d-d87f08803a98\" (UID: \"d146f337-883b-4294-851d-d87f08803a98\") " Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.363865 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d146f337-883b-4294-851d-d87f08803a98-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d146f337-883b-4294-851d-d87f08803a98" (UID: "d146f337-883b-4294-851d-d87f08803a98"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.364038 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d146f337-883b-4294-851d-d87f08803a98-combined-ca-bundle\") pod \"d146f337-883b-4294-851d-d87f08803a98\" (UID: \"d146f337-883b-4294-851d-d87f08803a98\") " Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.364094 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d146f337-883b-4294-851d-d87f08803a98-openstack-config-secret\") pod \"d146f337-883b-4294-851d-d87f08803a98\" (UID: \"d146f337-883b-4294-851d-d87f08803a98\") " Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.364562 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dgfs\" (UniqueName: \"kubernetes.io/projected/5bf2f8d1-1a23-4328-9169-1dea01964d94-kube-api-access-6dgfs\") pod \"openstackclient\" (UID: \"5bf2f8d1-1a23-4328-9169-1dea01964d94\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.364675 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5bf2f8d1-1a23-4328-9169-1dea01964d94-openstack-config-secret\") pod \"openstackclient\" (UID: \"5bf2f8d1-1a23-4328-9169-1dea01964d94\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.364797 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5bf2f8d1-1a23-4328-9169-1dea01964d94-openstack-config\") pod \"openstackclient\" (UID: \"5bf2f8d1-1a23-4328-9169-1dea01964d94\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc 
kubenswrapper[4705]: I0124 08:38:35.364928 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf2f8d1-1a23-4328-9169-1dea01964d94-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5bf2f8d1-1a23-4328-9169-1dea01964d94\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.364982 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwq5f\" (UniqueName: \"kubernetes.io/projected/d146f337-883b-4294-851d-d87f08803a98-kube-api-access-dwq5f\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.365008 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d146f337-883b-4294-851d-d87f08803a98-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.370049 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d146f337-883b-4294-851d-d87f08803a98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d146f337-883b-4294-851d-d87f08803a98" (UID: "d146f337-883b-4294-851d-d87f08803a98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.372378 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d146f337-883b-4294-851d-d87f08803a98-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d146f337-883b-4294-851d-d87f08803a98" (UID: "d146f337-883b-4294-851d-d87f08803a98"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.466971 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dgfs\" (UniqueName: \"kubernetes.io/projected/5bf2f8d1-1a23-4328-9169-1dea01964d94-kube-api-access-6dgfs\") pod \"openstackclient\" (UID: \"5bf2f8d1-1a23-4328-9169-1dea01964d94\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.467050 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5bf2f8d1-1a23-4328-9169-1dea01964d94-openstack-config-secret\") pod \"openstackclient\" (UID: \"5bf2f8d1-1a23-4328-9169-1dea01964d94\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.467127 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5bf2f8d1-1a23-4328-9169-1dea01964d94-openstack-config\") pod \"openstackclient\" (UID: \"5bf2f8d1-1a23-4328-9169-1dea01964d94\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.467205 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf2f8d1-1a23-4328-9169-1dea01964d94-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5bf2f8d1-1a23-4328-9169-1dea01964d94\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.467282 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d146f337-883b-4294-851d-d87f08803a98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.467295 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/d146f337-883b-4294-851d-d87f08803a98-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.468437 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5bf2f8d1-1a23-4328-9169-1dea01964d94-openstack-config\") pod \"openstackclient\" (UID: \"5bf2f8d1-1a23-4328-9169-1dea01964d94\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.473523 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf2f8d1-1a23-4328-9169-1dea01964d94-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5bf2f8d1-1a23-4328-9169-1dea01964d94\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.487086 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dgfs\" (UniqueName: \"kubernetes.io/projected/5bf2f8d1-1a23-4328-9169-1dea01964d94-kube-api-access-6dgfs\") pod \"openstackclient\" (UID: \"5bf2f8d1-1a23-4328-9169-1dea01964d94\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.488318 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5bf2f8d1-1a23-4328-9169-1dea01964d94-openstack-config-secret\") pod \"openstackclient\" (UID: \"5bf2f8d1-1a23-4328-9169-1dea01964d94\") " pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.528350 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.528761 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" containerName="aodh-listener" 
containerID="cri-o://8956a89a11cb15cd366f1061e23eccfc9b2a4126db227cc9f8a46d0d87256add" gracePeriod=30 Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.528860 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" containerName="aodh-notifier" containerID="cri-o://f328dcbf47581001f5f33ec280ff071aec244c0ed0f98c4b805cb8f85db1588f" gracePeriod=30 Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.528903 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" containerName="aodh-evaluator" containerID="cri-o://2ab5dbed6d74d017b85212b627aaa5ef9d5748256e652e38979a4f004bb6407d" gracePeriod=30 Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.528725 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" containerName="aodh-api" containerID="cri-o://a53200a50a7736aa011ce96c811f6f82c6b1d46f6712a6edbc485c2952a4da22" gracePeriod=30 Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.558797 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 24 08:38:35 crc kubenswrapper[4705]: I0124 08:38:35.604348 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d146f337-883b-4294-851d-d87f08803a98" path="/var/lib/kubelet/pods/d146f337-883b-4294-851d-d87f08803a98/volumes" Jan 24 08:38:36 crc kubenswrapper[4705]: I0124 08:38:36.213695 4705 generic.go:334] "Generic (PLEG): container finished" podID="8b4450b3-41c6-4da2-8abc-832f47666516" containerID="2ab5dbed6d74d017b85212b627aaa5ef9d5748256e652e38979a4f004bb6407d" exitCode=0 Jan 24 08:38:36 crc kubenswrapper[4705]: I0124 08:38:36.214051 4705 generic.go:334] "Generic (PLEG): container finished" podID="8b4450b3-41c6-4da2-8abc-832f47666516" containerID="a53200a50a7736aa011ce96c811f6f82c6b1d46f6712a6edbc485c2952a4da22" exitCode=0 Jan 24 08:38:36 crc kubenswrapper[4705]: I0124 08:38:36.214116 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 24 08:38:36 crc kubenswrapper[4705]: I0124 08:38:36.213755 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8b4450b3-41c6-4da2-8abc-832f47666516","Type":"ContainerDied","Data":"2ab5dbed6d74d017b85212b627aaa5ef9d5748256e652e38979a4f004bb6407d"} Jan 24 08:38:36 crc kubenswrapper[4705]: I0124 08:38:36.214539 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8b4450b3-41c6-4da2-8abc-832f47666516","Type":"ContainerDied","Data":"a53200a50a7736aa011ce96c811f6f82c6b1d46f6712a6edbc485c2952a4da22"} Jan 24 08:38:36 crc kubenswrapper[4705]: I0124 08:38:36.222975 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="d146f337-883b-4294-851d-d87f08803a98" podUID="5bf2f8d1-1a23-4328-9169-1dea01964d94" Jan 24 08:38:36 crc kubenswrapper[4705]: I0124 08:38:36.243534 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] 
Jan 24 08:38:36 crc kubenswrapper[4705]: W0124 08:38:36.246977 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bf2f8d1_1a23_4328_9169_1dea01964d94.slice/crio-0b0f470dd3f9534d26fef17f8183b314896853bdb57188b8b2b8268d141f53fe WatchSource:0}: Error finding container 0b0f470dd3f9534d26fef17f8183b314896853bdb57188b8b2b8268d141f53fe: Status 404 returned error can't find the container with id 0b0f470dd3f9534d26fef17f8183b314896853bdb57188b8b2b8268d141f53fe Jan 24 08:38:36 crc kubenswrapper[4705]: I0124 08:38:36.419171 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:38:36 crc kubenswrapper[4705]: I0124 08:38:36.419453 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="896f4b8f-b641-48e3-96c6-628688a52534" containerName="prometheus" containerID="cri-o://ccc0d47caf1c78b6a9469fc0e92477febd566c11253fe2803a58f78594dadfca" gracePeriod=600 Jan 24 08:38:36 crc kubenswrapper[4705]: I0124 08:38:36.419607 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="896f4b8f-b641-48e3-96c6-628688a52534" containerName="thanos-sidecar" containerID="cri-o://5b484558054f04110e48babde9e0094006cf75f09b35732be150ad79d5854b43" gracePeriod=600 Jan 24 08:38:36 crc kubenswrapper[4705]: I0124 08:38:36.419631 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="896f4b8f-b641-48e3-96c6-628688a52534" containerName="config-reloader" containerID="cri-o://314ebc5cecc031e782621d4c950035ef1fe917e0975be19122317af6ddedc425" gracePeriod=600 Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.071527 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.071642 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.229796 4705 generic.go:334] "Generic (PLEG): container finished" podID="896f4b8f-b641-48e3-96c6-628688a52534" containerID="5b484558054f04110e48babde9e0094006cf75f09b35732be150ad79d5854b43" exitCode=0 Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.230114 4705 generic.go:334] "Generic (PLEG): container finished" podID="896f4b8f-b641-48e3-96c6-628688a52534" containerID="314ebc5cecc031e782621d4c950035ef1fe917e0975be19122317af6ddedc425" exitCode=0 Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.230124 4705 generic.go:334] "Generic (PLEG): container finished" podID="896f4b8f-b641-48e3-96c6-628688a52534" containerID="ccc0d47caf1c78b6a9469fc0e92477febd566c11253fe2803a58f78594dadfca" exitCode=0 Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.229869 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"896f4b8f-b641-48e3-96c6-628688a52534","Type":"ContainerDied","Data":"5b484558054f04110e48babde9e0094006cf75f09b35732be150ad79d5854b43"} Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.230180 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"896f4b8f-b641-48e3-96c6-628688a52534","Type":"ContainerDied","Data":"314ebc5cecc031e782621d4c950035ef1fe917e0975be19122317af6ddedc425"} Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.230195 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"896f4b8f-b641-48e3-96c6-628688a52534","Type":"ContainerDied","Data":"ccc0d47caf1c78b6a9469fc0e92477febd566c11253fe2803a58f78594dadfca"} Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.232923 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5bf2f8d1-1a23-4328-9169-1dea01964d94","Type":"ContainerStarted","Data":"9b8a5a90d33ba70080a47e25deefedab6f17d327d502e15bd7798c3d7b668a55"} Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.232947 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5bf2f8d1-1a23-4328-9169-1dea01964d94","Type":"ContainerStarted","Data":"0b0f470dd3f9534d26fef17f8183b314896853bdb57188b8b2b8268d141f53fe"} Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.243335 4705 generic.go:334] "Generic (PLEG): container finished" podID="fed21d33-a27e-43a4-b5aa-7d3c25375467" containerID="19b1e4f3352b3c97e8b170f878cdffd963cef92743f71a3b49da9139d2ac4da0" exitCode=137 Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.260294 4705 generic.go:334] "Generic (PLEG): container finished" podID="8b4450b3-41c6-4da2-8abc-832f47666516" containerID="8956a89a11cb15cd366f1061e23eccfc9b2a4126db227cc9f8a46d0d87256add" exitCode=0 Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.260339 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8b4450b3-41c6-4da2-8abc-832f47666516","Type":"ContainerDied","Data":"8956a89a11cb15cd366f1061e23eccfc9b2a4126db227cc9f8a46d0d87256add"} Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.282496 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.282468431 podStartE2EDuration="2.282468431s" podCreationTimestamp="2026-01-24 08:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:38:37.256311077 +0000 UTC m=+3455.976184365" watchObservedRunningTime="2026-01-24 08:38:37.282468431 +0000 UTC m=+3456.002341729" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.561532 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.582029 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.663525 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-2\") pod \"896f4b8f-b641-48e3-96c6-628688a52534\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.663673 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb27j\" (UniqueName: \"kubernetes.io/projected/fed21d33-a27e-43a4-b5aa-7d3c25375467-kube-api-access-mb27j\") pod \"fed21d33-a27e-43a4-b5aa-7d3c25375467\" (UID: \"fed21d33-a27e-43a4-b5aa-7d3c25375467\") " Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.663746 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-0\") pod \"896f4b8f-b641-48e3-96c6-628688a52534\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.663879 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/896f4b8f-b641-48e3-96c6-628688a52534-config-out\") pod \"896f4b8f-b641-48e3-96c6-628688a52534\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.663912 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fed21d33-a27e-43a4-b5aa-7d3c25375467-openstack-config\") pod \"fed21d33-a27e-43a4-b5aa-7d3c25375467\" (UID: \"fed21d33-a27e-43a4-b5aa-7d3c25375467\") " Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.664018 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "896f4b8f-b641-48e3-96c6-628688a52534" (UID: "896f4b8f-b641-48e3-96c6-628688a52534"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.664102 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-web-config\") pod \"896f4b8f-b641-48e3-96c6-628688a52534\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.664171 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-1\") pod \"896f4b8f-b641-48e3-96c6-628688a52534\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.664213 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snv94\" (UniqueName: 
\"kubernetes.io/projected/896f4b8f-b641-48e3-96c6-628688a52534-kube-api-access-snv94\") pod \"896f4b8f-b641-48e3-96c6-628688a52534\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.664209 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "896f4b8f-b641-48e3-96c6-628688a52534" (UID: "896f4b8f-b641-48e3-96c6-628688a52534"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.664250 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-thanos-prometheus-http-client-file\") pod \"896f4b8f-b641-48e3-96c6-628688a52534\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.664277 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/896f4b8f-b641-48e3-96c6-628688a52534-tls-assets\") pod \"896f4b8f-b641-48e3-96c6-628688a52534\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.664302 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"896f4b8f-b641-48e3-96c6-628688a52534\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.664321 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/fed21d33-a27e-43a4-b5aa-7d3c25375467-openstack-config-secret\") pod \"fed21d33-a27e-43a4-b5aa-7d3c25375467\" (UID: \"fed21d33-a27e-43a4-b5aa-7d3c25375467\") " Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.664409 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed21d33-a27e-43a4-b5aa-7d3c25375467-combined-ca-bundle\") pod \"fed21d33-a27e-43a4-b5aa-7d3c25375467\" (UID: \"fed21d33-a27e-43a4-b5aa-7d3c25375467\") " Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.664471 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-config\") pod \"896f4b8f-b641-48e3-96c6-628688a52534\" (UID: \"896f4b8f-b641-48e3-96c6-628688a52534\") " Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.664478 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "896f4b8f-b641-48e3-96c6-628688a52534" (UID: "896f4b8f-b641-48e3-96c6-628688a52534"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.665429 4705 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.665456 4705 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.665471 4705 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/896f4b8f-b641-48e3-96c6-628688a52534-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.679722 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/896f4b8f-b641-48e3-96c6-628688a52534-kube-api-access-snv94" (OuterVolumeSpecName: "kube-api-access-snv94") pod "896f4b8f-b641-48e3-96c6-628688a52534" (UID: "896f4b8f-b641-48e3-96c6-628688a52534"). InnerVolumeSpecName "kube-api-access-snv94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.680074 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/896f4b8f-b641-48e3-96c6-628688a52534-config-out" (OuterVolumeSpecName: "config-out") pod "896f4b8f-b641-48e3-96c6-628688a52534" (UID: "896f4b8f-b641-48e3-96c6-628688a52534"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.681363 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "896f4b8f-b641-48e3-96c6-628688a52534" (UID: "896f4b8f-b641-48e3-96c6-628688a52534"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.682063 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/896f4b8f-b641-48e3-96c6-628688a52534-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "896f4b8f-b641-48e3-96c6-628688a52534" (UID: "896f4b8f-b641-48e3-96c6-628688a52534"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.689206 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-config" (OuterVolumeSpecName: "config") pod "896f4b8f-b641-48e3-96c6-628688a52534" (UID: "896f4b8f-b641-48e3-96c6-628688a52534"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.689418 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fed21d33-a27e-43a4-b5aa-7d3c25375467-kube-api-access-mb27j" (OuterVolumeSpecName: "kube-api-access-mb27j") pod "fed21d33-a27e-43a4-b5aa-7d3c25375467" (UID: "fed21d33-a27e-43a4-b5aa-7d3c25375467"). InnerVolumeSpecName "kube-api-access-mb27j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.689574 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "896f4b8f-b641-48e3-96c6-628688a52534" (UID: "896f4b8f-b641-48e3-96c6-628688a52534"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.716067 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fed21d33-a27e-43a4-b5aa-7d3c25375467-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fed21d33-a27e-43a4-b5aa-7d3c25375467" (UID: "fed21d33-a27e-43a4-b5aa-7d3c25375467"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.728081 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-web-config" (OuterVolumeSpecName: "web-config") pod "896f4b8f-b641-48e3-96c6-628688a52534" (UID: "896f4b8f-b641-48e3-96c6-628688a52534"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.743700 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fed21d33-a27e-43a4-b5aa-7d3c25375467-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fed21d33-a27e-43a4-b5aa-7d3c25375467" (UID: "fed21d33-a27e-43a4-b5aa-7d3c25375467"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.768374 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.768420 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb27j\" (UniqueName: \"kubernetes.io/projected/fed21d33-a27e-43a4-b5aa-7d3c25375467-kube-api-access-mb27j\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.768440 4705 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/896f4b8f-b641-48e3-96c6-628688a52534-config-out\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.768456 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fed21d33-a27e-43a4-b5aa-7d3c25375467-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.768469 4705 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-web-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.768482 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snv94\" (UniqueName: \"kubernetes.io/projected/896f4b8f-b641-48e3-96c6-628688a52534-kube-api-access-snv94\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.768494 4705 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/896f4b8f-b641-48e3-96c6-628688a52534-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:37 crc 
kubenswrapper[4705]: I0124 08:38:37.768507 4705 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/896f4b8f-b641-48e3-96c6-628688a52534-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.768563 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.768576 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed21d33-a27e-43a4-b5aa-7d3c25375467-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.772063 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fed21d33-a27e-43a4-b5aa-7d3c25375467-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fed21d33-a27e-43a4-b5aa-7d3c25375467" (UID: "fed21d33-a27e-43a4-b5aa-7d3c25375467"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.794601 4705 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.870367 4705 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:37 crc kubenswrapper[4705]: I0124 08:38:37.870400 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fed21d33-a27e-43a4-b5aa-7d3c25375467-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.278904 4705 scope.go:117] "RemoveContainer" containerID="19b1e4f3352b3c97e8b170f878cdffd963cef92743f71a3b49da9139d2ac4da0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.279037 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.287420 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"896f4b8f-b641-48e3-96c6-628688a52534","Type":"ContainerDied","Data":"bede8cd169f0ceda992bc101cf8be52a450773dae64afbe804441c56353b3834"} Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.287528 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.395037 4705 scope.go:117] "RemoveContainer" containerID="5b484558054f04110e48babde9e0094006cf75f09b35732be150ad79d5854b43" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.417929 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.483776 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.487370 4705 scope.go:117] "RemoveContainer" containerID="314ebc5cecc031e782621d4c950035ef1fe917e0975be19122317af6ddedc425" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.494365 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:38:38 crc kubenswrapper[4705]: E0124 08:38:38.494884 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="896f4b8f-b641-48e3-96c6-628688a52534" containerName="prometheus" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.494903 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="896f4b8f-b641-48e3-96c6-628688a52534" containerName="prometheus" Jan 24 08:38:38 crc kubenswrapper[4705]: E0124 08:38:38.494917 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="896f4b8f-b641-48e3-96c6-628688a52534" containerName="thanos-sidecar" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.494925 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="896f4b8f-b641-48e3-96c6-628688a52534" containerName="thanos-sidecar" Jan 24 08:38:38 crc kubenswrapper[4705]: E0124 08:38:38.494946 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="896f4b8f-b641-48e3-96c6-628688a52534" containerName="init-config-reloader" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.494968 4705 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="896f4b8f-b641-48e3-96c6-628688a52534" containerName="init-config-reloader" Jan 24 08:38:38 crc kubenswrapper[4705]: E0124 08:38:38.494984 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="896f4b8f-b641-48e3-96c6-628688a52534" containerName="config-reloader" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.494991 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="896f4b8f-b641-48e3-96c6-628688a52534" containerName="config-reloader" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.495198 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="896f4b8f-b641-48e3-96c6-628688a52534" containerName="prometheus" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.495219 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="896f4b8f-b641-48e3-96c6-628688a52534" containerName="thanos-sidecar" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.495234 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="896f4b8f-b641-48e3-96c6-628688a52534" containerName="config-reloader" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.497019 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.504958 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.505370 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.505570 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.505752 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.505988 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.506802 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.507044 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-c9h62" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.508228 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.515920 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.538254 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.564109 4705 scope.go:117] "RemoveContainer" 
containerID="ccc0d47caf1c78b6a9469fc0e92477febd566c11253fe2803a58f78594dadfca" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.633052 4705 scope.go:117] "RemoveContainer" containerID="df51a3aceaa9076b238cb6daf455db3486d0ddabbad0e016e87184f4417c745f" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.688353 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.688422 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn4bd\" (UniqueName: \"kubernetes.io/projected/e5e4bccb-4c85-438c-97dd-e47341b379e5-kube-api-access-sn4bd\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.688473 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.688500 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: 
I0124 08:38:38.688533 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.688593 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e5e4bccb-4c85-438c-97dd-e47341b379e5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.688642 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.688676 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e5e4bccb-4c85-438c-97dd-e47341b379e5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.688704 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.688768 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.688809 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.688971 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.689176 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-config\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: 
I0124 08:38:38.791145 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.791490 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e5e4bccb-4c85-438c-97dd-e47341b379e5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.791679 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.792451 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.792721 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " 
pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.792832 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.792959 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-config\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.793172 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.793311 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn4bd\" (UniqueName: \"kubernetes.io/projected/e5e4bccb-4c85-438c-97dd-e47341b379e5-kube-api-access-sn4bd\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.793469 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" 
(UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.793562 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.793674 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.793827 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e5e4bccb-4c85-438c-97dd-e47341b379e5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.794392 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.797488 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.798340 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.799372 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.799638 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.799758 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.800125 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.800467 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e5e4bccb-4c85-438c-97dd-e47341b379e5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.802133 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.802612 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e5e4bccb-4c85-438c-97dd-e47341b379e5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.803485 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-config\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.805375 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.818605 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn4bd\" (UniqueName: \"kubernetes.io/projected/e5e4bccb-4c85-438c-97dd-e47341b379e5-kube-api-access-sn4bd\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.831288 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"prometheus-metric-storage-0\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:38 crc kubenswrapper[4705]: I0124 08:38:38.845926 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 24 08:38:39 crc kubenswrapper[4705]: W0124 08:38:39.342036 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5e4bccb_4c85_438c_97dd_e47341b379e5.slice/crio-3e611684257308962fa4bcc6f5efe27b0bd1f74b98dd3f12c596707ad29db00a WatchSource:0}: Error finding container 3e611684257308962fa4bcc6f5efe27b0bd1f74b98dd3f12c596707ad29db00a: Status 404 returned error can't find the container with id 3e611684257308962fa4bcc6f5efe27b0bd1f74b98dd3f12c596707ad29db00a Jan 24 08:38:39 crc kubenswrapper[4705]: I0124 08:38:39.344041 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:38:39 crc kubenswrapper[4705]: I0124 08:38:39.602359 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="896f4b8f-b641-48e3-96c6-628688a52534" path="/var/lib/kubelet/pods/896f4b8f-b641-48e3-96c6-628688a52534/volumes" Jan 24 08:38:39 crc kubenswrapper[4705]: I0124 08:38:39.603629 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fed21d33-a27e-43a4-b5aa-7d3c25375467" path="/var/lib/kubelet/pods/fed21d33-a27e-43a4-b5aa-7d3c25375467/volumes" Jan 24 08:38:40 crc kubenswrapper[4705]: I0124 08:38:40.309749 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e4bccb-4c85-438c-97dd-e47341b379e5","Type":"ContainerStarted","Data":"3e611684257308962fa4bcc6f5efe27b0bd1f74b98dd3f12c596707ad29db00a"} Jan 24 08:38:40 crc kubenswrapper[4705]: I0124 08:38:40.448423 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="896f4b8f-b641-48e3-96c6-628688a52534" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.14:9090/-/ready\": dial tcp 10.217.1.14:9090: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 24 08:38:41 
crc kubenswrapper[4705]: I0124 08:38:41.322209 4705 generic.go:334] "Generic (PLEG): container finished" podID="8b4450b3-41c6-4da2-8abc-832f47666516" containerID="f328dcbf47581001f5f33ec280ff071aec244c0ed0f98c4b805cb8f85db1588f" exitCode=0 Jan 24 08:38:41 crc kubenswrapper[4705]: I0124 08:38:41.322349 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8b4450b3-41c6-4da2-8abc-832f47666516","Type":"ContainerDied","Data":"f328dcbf47581001f5f33ec280ff071aec244c0ed0f98c4b805cb8f85db1588f"} Jan 24 08:38:41 crc kubenswrapper[4705]: I0124 08:38:41.322515 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8b4450b3-41c6-4da2-8abc-832f47666516","Type":"ContainerDied","Data":"8ae56bd1c2ddf660a64ef1656448fc48781be90f9bac74f2bf45e74d6aba2ebc"} Jan 24 08:38:41 crc kubenswrapper[4705]: I0124 08:38:41.322528 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ae56bd1c2ddf660a64ef1656448fc48781be90f9bac74f2bf45e74d6aba2ebc" Jan 24 08:38:41 crc kubenswrapper[4705]: I0124 08:38:41.490513 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 24 08:38:41 crc kubenswrapper[4705]: I0124 08:38:41.654534 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-combined-ca-bundle\") pod \"8b4450b3-41c6-4da2-8abc-832f47666516\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " Jan 24 08:38:41 crc kubenswrapper[4705]: I0124 08:38:41.654721 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-config-data\") pod \"8b4450b3-41c6-4da2-8abc-832f47666516\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " Jan 24 08:38:41 crc kubenswrapper[4705]: I0124 08:38:41.654760 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-scripts\") pod \"8b4450b3-41c6-4da2-8abc-832f47666516\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " Jan 24 08:38:41 crc kubenswrapper[4705]: I0124 08:38:41.654811 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-internal-tls-certs\") pod \"8b4450b3-41c6-4da2-8abc-832f47666516\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " Jan 24 08:38:41 crc kubenswrapper[4705]: I0124 08:38:41.654890 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw6pt\" (UniqueName: \"kubernetes.io/projected/8b4450b3-41c6-4da2-8abc-832f47666516-kube-api-access-dw6pt\") pod \"8b4450b3-41c6-4da2-8abc-832f47666516\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " Jan 24 08:38:41 crc kubenswrapper[4705]: I0124 08:38:41.654955 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-public-tls-certs\") pod \"8b4450b3-41c6-4da2-8abc-832f47666516\" (UID: \"8b4450b3-41c6-4da2-8abc-832f47666516\") " Jan 24 08:38:41 crc kubenswrapper[4705]: I0124 08:38:41.847661 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-scripts" (OuterVolumeSpecName: "scripts") pod "8b4450b3-41c6-4da2-8abc-832f47666516" (UID: "8b4450b3-41c6-4da2-8abc-832f47666516"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:41 crc kubenswrapper[4705]: I0124 08:38:41.848412 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b4450b3-41c6-4da2-8abc-832f47666516" (UID: "8b4450b3-41c6-4da2-8abc-832f47666516"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:41 crc kubenswrapper[4705]: I0124 08:38:41.859338 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:41 crc kubenswrapper[4705]: I0124 08:38:41.859375 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:41 crc kubenswrapper[4705]: I0124 08:38:41.938359 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b4450b3-41c6-4da2-8abc-832f47666516-kube-api-access-dw6pt" (OuterVolumeSpecName: "kube-api-access-dw6pt") pod "8b4450b3-41c6-4da2-8abc-832f47666516" (UID: "8b4450b3-41c6-4da2-8abc-832f47666516"). InnerVolumeSpecName "kube-api-access-dw6pt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:38:41 crc kubenswrapper[4705]: I0124 08:38:41.961575 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw6pt\" (UniqueName: \"kubernetes.io/projected/8b4450b3-41c6-4da2-8abc-832f47666516-kube-api-access-dw6pt\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.042278 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8b4450b3-41c6-4da2-8abc-832f47666516" (UID: "8b4450b3-41c6-4da2-8abc-832f47666516"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.063426 4705 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.246324 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8b4450b3-41c6-4da2-8abc-832f47666516" (UID: "8b4450b3-41c6-4da2-8abc-832f47666516"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.270099 4705 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.330973 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.775321 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-config-data" (OuterVolumeSpecName: "config-data") pod "8b4450b3-41c6-4da2-8abc-832f47666516" (UID: "8b4450b3-41c6-4da2-8abc-832f47666516"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.778547 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b4450b3-41c6-4da2-8abc-832f47666516-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.970494 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.979205 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.990508 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 24 08:38:42 crc kubenswrapper[4705]: E0124 08:38:42.990926 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" containerName="aodh-api" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.990938 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" containerName="aodh-api" Jan 24 08:38:42 crc kubenswrapper[4705]: E0124 08:38:42.990957 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" containerName="aodh-notifier" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.990963 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" containerName="aodh-notifier" Jan 24 08:38:42 crc kubenswrapper[4705]: E0124 08:38:42.990980 
4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" containerName="aodh-evaluator" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.990987 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" containerName="aodh-evaluator" Jan 24 08:38:42 crc kubenswrapper[4705]: E0124 08:38:42.991009 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" containerName="aodh-listener" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.991018 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" containerName="aodh-listener" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.991183 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" containerName="aodh-evaluator" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.991196 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" containerName="aodh-api" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.991211 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" containerName="aodh-notifier" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.991218 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" containerName="aodh-listener" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.993056 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.995311 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.995571 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.995643 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.999430 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-q9ztl" Jan 24 08:38:42 crc kubenswrapper[4705]: I0124 08:38:42.999624 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.009050 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.085980 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5p7t\" (UniqueName: \"kubernetes.io/projected/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-kube-api-access-b5p7t\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.086027 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-internal-tls-certs\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.086099 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-scripts\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.086146 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-public-tls-certs\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.086203 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-config-data\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.086227 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-combined-ca-bundle\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.188692 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5p7t\" (UniqueName: \"kubernetes.io/projected/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-kube-api-access-b5p7t\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.188756 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-internal-tls-certs\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: 
I0124 08:38:43.188880 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-scripts\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.188946 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-public-tls-certs\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.189031 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-config-data\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.189057 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-combined-ca-bundle\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.193992 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-config-data\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.194582 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-scripts\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc 
kubenswrapper[4705]: I0124 08:38:43.194800 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-internal-tls-certs\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.198711 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-combined-ca-bundle\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.205643 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-public-tls-certs\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.210521 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5p7t\" (UniqueName: \"kubernetes.io/projected/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-kube-api-access-b5p7t\") pod \"aodh-0\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.359441 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.591329 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b4450b3-41c6-4da2-8abc-832f47666516" path="/var/lib/kubelet/pods/8b4450b3-41c6-4da2-8abc-832f47666516/volumes" Jan 24 08:38:43 crc kubenswrapper[4705]: I0124 08:38:43.851993 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 24 08:38:44 crc kubenswrapper[4705]: I0124 08:38:44.352830 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e4bccb-4c85-438c-97dd-e47341b379e5","Type":"ContainerStarted","Data":"d017e327c216b36222cab50471fb824aab10f78e29544650b9905f514e8f22e3"} Jan 24 08:38:44 crc kubenswrapper[4705]: I0124 08:38:44.353800 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04","Type":"ContainerStarted","Data":"035e3e365b86e4d6a678017e4acbe7b26c052b8d201c4a809317acc34a6c029a"} Jan 24 08:38:45 crc kubenswrapper[4705]: I0124 08:38:45.366111 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04","Type":"ContainerStarted","Data":"168962d43f9d97d12230e2d04afa24e5701a3a3319e5e4c9c74f4046de8b84ae"} Jan 24 08:38:45 crc kubenswrapper[4705]: I0124 08:38:45.366661 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04","Type":"ContainerStarted","Data":"57f18f4816a46c60510af73605708355004b9c25a0b319dde5f9deb6404582d8"} Jan 24 08:38:46 crc kubenswrapper[4705]: I0124 08:38:46.380015 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04","Type":"ContainerStarted","Data":"5d427c6aa89e1c1f4c26cc80d909872a4b6fbea9a2e367634bc6194fe50b3e70"} Jan 24 08:38:47 crc kubenswrapper[4705]: I0124 08:38:47.392500 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04","Type":"ContainerStarted","Data":"25f4347cea018898acecd256034ad4b9fbce76c75e905a64a89297809d169fe0"} Jan 24 08:38:47 crc kubenswrapper[4705]: I0124 08:38:47.420384 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.732475198 podStartE2EDuration="5.420363341s" podCreationTimestamp="2026-01-24 08:38:42 +0000 UTC" firstStartedPulling="2026-01-24 08:38:43.857958791 +0000 UTC m=+3462.577832079" lastFinishedPulling="2026-01-24 08:38:46.545846934 +0000 UTC m=+3465.265720222" observedRunningTime="2026-01-24 08:38:47.416720077 +0000 UTC m=+3466.136593385" watchObservedRunningTime="2026-01-24 08:38:47.420363341 +0000 UTC m=+3466.140236629" Jan 24 08:38:51 crc kubenswrapper[4705]: I0124 08:38:51.430246 4705 generic.go:334] "Generic (PLEG): container finished" podID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerID="d017e327c216b36222cab50471fb824aab10f78e29544650b9905f514e8f22e3" exitCode=0 Jan 24 08:38:51 crc kubenswrapper[4705]: I0124 08:38:51.430338 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e4bccb-4c85-438c-97dd-e47341b379e5","Type":"ContainerDied","Data":"d017e327c216b36222cab50471fb824aab10f78e29544650b9905f514e8f22e3"} Jan 24 08:38:52 crc kubenswrapper[4705]: I0124 08:38:52.443607 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e4bccb-4c85-438c-97dd-e47341b379e5","Type":"ContainerStarted","Data":"6e3fe5e9638f66f1c564e527286a236c45fd9302675b4d05c7757c5f1641ae08"} Jan 24 08:38:53 crc kubenswrapper[4705]: I0124 08:38:53.724647 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-whdpb"] Jan 24 08:38:53 crc kubenswrapper[4705]: I0124 08:38:53.727594 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:38:53 crc kubenswrapper[4705]: I0124 08:38:53.734463 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-whdpb"] Jan 24 08:38:53 crc kubenswrapper[4705]: I0124 08:38:53.842959 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb588191-355b-423b-9c6d-467f91929771-utilities\") pod \"community-operators-whdpb\" (UID: \"cb588191-355b-423b-9c6d-467f91929771\") " pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:38:53 crc kubenswrapper[4705]: I0124 08:38:53.843174 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb588191-355b-423b-9c6d-467f91929771-catalog-content\") pod \"community-operators-whdpb\" (UID: \"cb588191-355b-423b-9c6d-467f91929771\") " pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:38:53 crc kubenswrapper[4705]: I0124 08:38:53.843243 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6dw2\" (UniqueName: \"kubernetes.io/projected/cb588191-355b-423b-9c6d-467f91929771-kube-api-access-h6dw2\") pod \"community-operators-whdpb\" (UID: \"cb588191-355b-423b-9c6d-467f91929771\") " pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:38:53 crc kubenswrapper[4705]: I0124 08:38:53.944884 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb588191-355b-423b-9c6d-467f91929771-catalog-content\") pod \"community-operators-whdpb\" (UID: \"cb588191-355b-423b-9c6d-467f91929771\") " pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:38:53 crc kubenswrapper[4705]: I0124 08:38:53.944961 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-h6dw2\" (UniqueName: \"kubernetes.io/projected/cb588191-355b-423b-9c6d-467f91929771-kube-api-access-h6dw2\") pod \"community-operators-whdpb\" (UID: \"cb588191-355b-423b-9c6d-467f91929771\") " pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:38:53 crc kubenswrapper[4705]: I0124 08:38:53.945057 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb588191-355b-423b-9c6d-467f91929771-utilities\") pod \"community-operators-whdpb\" (UID: \"cb588191-355b-423b-9c6d-467f91929771\") " pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:38:53 crc kubenswrapper[4705]: I0124 08:38:53.945506 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb588191-355b-423b-9c6d-467f91929771-utilities\") pod \"community-operators-whdpb\" (UID: \"cb588191-355b-423b-9c6d-467f91929771\") " pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:38:53 crc kubenswrapper[4705]: I0124 08:38:53.945741 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb588191-355b-423b-9c6d-467f91929771-catalog-content\") pod \"community-operators-whdpb\" (UID: \"cb588191-355b-423b-9c6d-467f91929771\") " pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:38:54 crc kubenswrapper[4705]: I0124 08:38:54.139371 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6dw2\" (UniqueName: \"kubernetes.io/projected/cb588191-355b-423b-9c6d-467f91929771-kube-api-access-h6dw2\") pod \"community-operators-whdpb\" (UID: \"cb588191-355b-423b-9c6d-467f91929771\") " pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:38:54 crc kubenswrapper[4705]: I0124 08:38:54.363631 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:38:54 crc kubenswrapper[4705]: I0124 08:38:54.710113 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-whdpb"] Jan 24 08:38:54 crc kubenswrapper[4705]: W0124 08:38:54.710453 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb588191_355b_423b_9c6d_467f91929771.slice/crio-19b152628830e43a1b1586d44bdbd8b954a4a7953e9614af1ee5422b5b04970a WatchSource:0}: Error finding container 19b152628830e43a1b1586d44bdbd8b954a4a7953e9614af1ee5422b5b04970a: Status 404 returned error can't find the container with id 19b152628830e43a1b1586d44bdbd8b954a4a7953e9614af1ee5422b5b04970a Jan 24 08:38:55 crc kubenswrapper[4705]: I0124 08:38:55.470956 4705 generic.go:334] "Generic (PLEG): container finished" podID="cb588191-355b-423b-9c6d-467f91929771" containerID="80b46bcaf5c8117b6325e5192824c16d8be669fafd72af6415f8cf4faa605ffa" exitCode=0 Jan 24 08:38:55 crc kubenswrapper[4705]: I0124 08:38:55.471089 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whdpb" event={"ID":"cb588191-355b-423b-9c6d-467f91929771","Type":"ContainerDied","Data":"80b46bcaf5c8117b6325e5192824c16d8be669fafd72af6415f8cf4faa605ffa"} Jan 24 08:38:55 crc kubenswrapper[4705]: I0124 08:38:55.471235 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whdpb" event={"ID":"cb588191-355b-423b-9c6d-467f91929771","Type":"ContainerStarted","Data":"19b152628830e43a1b1586d44bdbd8b954a4a7953e9614af1ee5422b5b04970a"} Jan 24 08:38:55 crc kubenswrapper[4705]: I0124 08:38:55.475799 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"e5e4bccb-4c85-438c-97dd-e47341b379e5","Type":"ContainerStarted","Data":"ad7e2de8ff786a17799930a5b08f6e1fd473f566424e0e34f9985119f045d522"} Jan 24 08:38:56 crc kubenswrapper[4705]: I0124 08:38:56.501873 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e4bccb-4c85-438c-97dd-e47341b379e5","Type":"ContainerStarted","Data":"376c1ba9a9a10cc06a9e6de1cebc1516ede5fcf253d4ff9b3e06c4c2a582bb0e"} Jan 24 08:38:56 crc kubenswrapper[4705]: I0124 08:38:56.505862 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whdpb" event={"ID":"cb588191-355b-423b-9c6d-467f91929771","Type":"ContainerStarted","Data":"c5bad189fd7301e56ab2220fe8e51ab3c03727b275cc1cba5607b59df14d0671"} Jan 24 08:38:56 crc kubenswrapper[4705]: I0124 08:38:56.531210 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.531192686 podStartE2EDuration="18.531192686s" podCreationTimestamp="2026-01-24 08:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:38:56.527567993 +0000 UTC m=+3475.247441291" watchObservedRunningTime="2026-01-24 08:38:56.531192686 +0000 UTC m=+3475.251065984" Jan 24 08:38:57 crc kubenswrapper[4705]: I0124 08:38:57.517936 4705 generic.go:334] "Generic (PLEG): container finished" podID="cb588191-355b-423b-9c6d-467f91929771" containerID="c5bad189fd7301e56ab2220fe8e51ab3c03727b275cc1cba5607b59df14d0671" exitCode=0 Jan 24 08:38:57 crc kubenswrapper[4705]: I0124 08:38:57.518017 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whdpb" event={"ID":"cb588191-355b-423b-9c6d-467f91929771","Type":"ContainerDied","Data":"c5bad189fd7301e56ab2220fe8e51ab3c03727b275cc1cba5607b59df14d0671"} Jan 24 08:38:58 crc kubenswrapper[4705]: I0124 
08:38:58.565906 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whdpb" event={"ID":"cb588191-355b-423b-9c6d-467f91929771","Type":"ContainerStarted","Data":"4cdc443defcdbc4d73dfda6a9d22ae2e641282da53d980a0d50406e810ea438a"} Jan 24 08:38:58 crc kubenswrapper[4705]: I0124 08:38:58.591342 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-whdpb" podStartSLOduration=3.068270852 podStartE2EDuration="5.591315577s" podCreationTimestamp="2026-01-24 08:38:53 +0000 UTC" firstStartedPulling="2026-01-24 08:38:55.474564279 +0000 UTC m=+3474.194437567" lastFinishedPulling="2026-01-24 08:38:57.997608994 +0000 UTC m=+3476.717482292" observedRunningTime="2026-01-24 08:38:58.585897263 +0000 UTC m=+3477.305770571" watchObservedRunningTime="2026-01-24 08:38:58.591315577 +0000 UTC m=+3477.311188885" Jan 24 08:38:58 crc kubenswrapper[4705]: I0124 08:38:58.847013 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 24 08:39:04 crc kubenswrapper[4705]: I0124 08:39:04.364799 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:39:04 crc kubenswrapper[4705]: I0124 08:39:04.365471 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:39:04 crc kubenswrapper[4705]: I0124 08:39:04.410816 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:39:04 crc kubenswrapper[4705]: I0124 08:39:04.736372 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:39:04 crc kubenswrapper[4705]: I0124 08:39:04.800411 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-whdpb"] Jan 24 08:39:06 crc kubenswrapper[4705]: I0124 08:39:06.697791 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-whdpb" podUID="cb588191-355b-423b-9c6d-467f91929771" containerName="registry-server" containerID="cri-o://4cdc443defcdbc4d73dfda6a9d22ae2e641282da53d980a0d50406e810ea438a" gracePeriod=2 Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.070998 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.071344 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.208463 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.374788 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb588191-355b-423b-9c6d-467f91929771-utilities\") pod \"cb588191-355b-423b-9c6d-467f91929771\" (UID: \"cb588191-355b-423b-9c6d-467f91929771\") " Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.375200 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6dw2\" (UniqueName: \"kubernetes.io/projected/cb588191-355b-423b-9c6d-467f91929771-kube-api-access-h6dw2\") pod \"cb588191-355b-423b-9c6d-467f91929771\" (UID: \"cb588191-355b-423b-9c6d-467f91929771\") " Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.375429 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb588191-355b-423b-9c6d-467f91929771-catalog-content\") pod \"cb588191-355b-423b-9c6d-467f91929771\" (UID: \"cb588191-355b-423b-9c6d-467f91929771\") " Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.375697 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb588191-355b-423b-9c6d-467f91929771-utilities" (OuterVolumeSpecName: "utilities") pod "cb588191-355b-423b-9c6d-467f91929771" (UID: "cb588191-355b-423b-9c6d-467f91929771"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.376163 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb588191-355b-423b-9c6d-467f91929771-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.381659 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb588191-355b-423b-9c6d-467f91929771-kube-api-access-h6dw2" (OuterVolumeSpecName: "kube-api-access-h6dw2") pod "cb588191-355b-423b-9c6d-467f91929771" (UID: "cb588191-355b-423b-9c6d-467f91929771"). InnerVolumeSpecName "kube-api-access-h6dw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.430464 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb588191-355b-423b-9c6d-467f91929771-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb588191-355b-423b-9c6d-467f91929771" (UID: "cb588191-355b-423b-9c6d-467f91929771"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.478586 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6dw2\" (UniqueName: \"kubernetes.io/projected/cb588191-355b-423b-9c6d-467f91929771-kube-api-access-h6dw2\") on node \"crc\" DevicePath \"\"" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.478624 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb588191-355b-423b-9c6d-467f91929771-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.711334 4705 generic.go:334] "Generic (PLEG): container finished" podID="cb588191-355b-423b-9c6d-467f91929771" containerID="4cdc443defcdbc4d73dfda6a9d22ae2e641282da53d980a0d50406e810ea438a" exitCode=0 Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.711381 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whdpb" event={"ID":"cb588191-355b-423b-9c6d-467f91929771","Type":"ContainerDied","Data":"4cdc443defcdbc4d73dfda6a9d22ae2e641282da53d980a0d50406e810ea438a"} Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.711425 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whdpb" event={"ID":"cb588191-355b-423b-9c6d-467f91929771","Type":"ContainerDied","Data":"19b152628830e43a1b1586d44bdbd8b954a4a7953e9614af1ee5422b5b04970a"} Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.711460 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-whdpb" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.711498 4705 scope.go:117] "RemoveContainer" containerID="4cdc443defcdbc4d73dfda6a9d22ae2e641282da53d980a0d50406e810ea438a" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.763034 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-whdpb"] Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.765762 4705 scope.go:117] "RemoveContainer" containerID="c5bad189fd7301e56ab2220fe8e51ab3c03727b275cc1cba5607b59df14d0671" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.776161 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-whdpb"] Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.793636 4705 scope.go:117] "RemoveContainer" containerID="80b46bcaf5c8117b6325e5192824c16d8be669fafd72af6415f8cf4faa605ffa" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.846702 4705 scope.go:117] "RemoveContainer" containerID="4cdc443defcdbc4d73dfda6a9d22ae2e641282da53d980a0d50406e810ea438a" Jan 24 08:39:07 crc kubenswrapper[4705]: E0124 08:39:07.847469 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cdc443defcdbc4d73dfda6a9d22ae2e641282da53d980a0d50406e810ea438a\": container with ID starting with 4cdc443defcdbc4d73dfda6a9d22ae2e641282da53d980a0d50406e810ea438a not found: ID does not exist" containerID="4cdc443defcdbc4d73dfda6a9d22ae2e641282da53d980a0d50406e810ea438a" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.847603 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cdc443defcdbc4d73dfda6a9d22ae2e641282da53d980a0d50406e810ea438a"} err="failed to get container status \"4cdc443defcdbc4d73dfda6a9d22ae2e641282da53d980a0d50406e810ea438a\": rpc error: code = NotFound desc = could not find 
container \"4cdc443defcdbc4d73dfda6a9d22ae2e641282da53d980a0d50406e810ea438a\": container with ID starting with 4cdc443defcdbc4d73dfda6a9d22ae2e641282da53d980a0d50406e810ea438a not found: ID does not exist" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.847655 4705 scope.go:117] "RemoveContainer" containerID="c5bad189fd7301e56ab2220fe8e51ab3c03727b275cc1cba5607b59df14d0671" Jan 24 08:39:07 crc kubenswrapper[4705]: E0124 08:39:07.848146 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5bad189fd7301e56ab2220fe8e51ab3c03727b275cc1cba5607b59df14d0671\": container with ID starting with c5bad189fd7301e56ab2220fe8e51ab3c03727b275cc1cba5607b59df14d0671 not found: ID does not exist" containerID="c5bad189fd7301e56ab2220fe8e51ab3c03727b275cc1cba5607b59df14d0671" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.848186 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5bad189fd7301e56ab2220fe8e51ab3c03727b275cc1cba5607b59df14d0671"} err="failed to get container status \"c5bad189fd7301e56ab2220fe8e51ab3c03727b275cc1cba5607b59df14d0671\": rpc error: code = NotFound desc = could not find container \"c5bad189fd7301e56ab2220fe8e51ab3c03727b275cc1cba5607b59df14d0671\": container with ID starting with c5bad189fd7301e56ab2220fe8e51ab3c03727b275cc1cba5607b59df14d0671 not found: ID does not exist" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.848200 4705 scope.go:117] "RemoveContainer" containerID="80b46bcaf5c8117b6325e5192824c16d8be669fafd72af6415f8cf4faa605ffa" Jan 24 08:39:07 crc kubenswrapper[4705]: E0124 08:39:07.848709 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80b46bcaf5c8117b6325e5192824c16d8be669fafd72af6415f8cf4faa605ffa\": container with ID starting with 80b46bcaf5c8117b6325e5192824c16d8be669fafd72af6415f8cf4faa605ffa not found: ID does 
not exist" containerID="80b46bcaf5c8117b6325e5192824c16d8be669fafd72af6415f8cf4faa605ffa" Jan 24 08:39:07 crc kubenswrapper[4705]: I0124 08:39:07.848801 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80b46bcaf5c8117b6325e5192824c16d8be669fafd72af6415f8cf4faa605ffa"} err="failed to get container status \"80b46bcaf5c8117b6325e5192824c16d8be669fafd72af6415f8cf4faa605ffa\": rpc error: code = NotFound desc = could not find container \"80b46bcaf5c8117b6325e5192824c16d8be669fafd72af6415f8cf4faa605ffa\": container with ID starting with 80b46bcaf5c8117b6325e5192824c16d8be669fafd72af6415f8cf4faa605ffa not found: ID does not exist" Jan 24 08:39:08 crc kubenswrapper[4705]: I0124 08:39:08.847053 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 24 08:39:08 crc kubenswrapper[4705]: I0124 08:39:08.854232 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 24 08:39:09 crc kubenswrapper[4705]: I0124 08:39:09.603058 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb588191-355b-423b-9c6d-467f91929771" path="/var/lib/kubelet/pods/cb588191-355b-423b-9c6d-467f91929771/volumes" Jan 24 08:39:09 crc kubenswrapper[4705]: I0124 08:39:09.747870 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 24 08:39:37 crc kubenswrapper[4705]: I0124 08:39:37.071730 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:39:37 crc kubenswrapper[4705]: I0124 08:39:37.072570 4705 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:39:37 crc kubenswrapper[4705]: I0124 08:39:37.072711 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 08:39:37 crc kubenswrapper[4705]: I0124 08:39:37.099917 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3fa3be8e6420f4da812f5a765ecc9e11eb0505d6fe3b7bf40a938f2ed494500a"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 08:39:37 crc kubenswrapper[4705]: I0124 08:39:37.099996 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://3fa3be8e6420f4da812f5a765ecc9e11eb0505d6fe3b7bf40a938f2ed494500a" gracePeriod=600 Jan 24 08:39:38 crc kubenswrapper[4705]: I0124 08:39:38.113216 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerID="3fa3be8e6420f4da812f5a765ecc9e11eb0505d6fe3b7bf40a938f2ed494500a" exitCode=0 Jan 24 08:39:38 crc kubenswrapper[4705]: I0124 08:39:38.113337 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"3fa3be8e6420f4da812f5a765ecc9e11eb0505d6fe3b7bf40a938f2ed494500a"} Jan 24 08:39:38 crc kubenswrapper[4705]: I0124 08:39:38.113737 4705 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73"} Jan 24 08:39:38 crc kubenswrapper[4705]: I0124 08:39:38.113780 4705 scope.go:117] "RemoveContainer" containerID="28f70bfca33ff096658041890eb987fd412302f0822d0177dabfde55c2e973ff" Jan 24 08:40:38 crc kubenswrapper[4705]: I0124 08:40:38.818738 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7c64596589-v9zxl_f5382856-3a6e-4d10-beb2-9df688e2f6c7/manager/0.log" Jan 24 08:40:41 crc kubenswrapper[4705]: I0124 08:40:41.039895 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:40:41 crc kubenswrapper[4705]: I0124 08:40:41.040588 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="prometheus" containerID="cri-o://6e3fe5e9638f66f1c564e527286a236c45fd9302675b4d05c7757c5f1641ae08" gracePeriod=600 Jan 24 08:40:41 crc kubenswrapper[4705]: I0124 08:40:41.041355 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="config-reloader" containerID="cri-o://ad7e2de8ff786a17799930a5b08f6e1fd473f566424e0e34f9985119f045d522" gracePeriod=600 Jan 24 08:40:41 crc kubenswrapper[4705]: I0124 08:40:41.041362 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="thanos-sidecar" containerID="cri-o://376c1ba9a9a10cc06a9e6de1cebc1516ede5fcf253d4ff9b3e06c4c2a582bb0e" gracePeriod=600 Jan 24 08:40:42 crc kubenswrapper[4705]: I0124 08:40:42.012435 4705 
generic.go:334] "Generic (PLEG): container finished" podID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerID="376c1ba9a9a10cc06a9e6de1cebc1516ede5fcf253d4ff9b3e06c4c2a582bb0e" exitCode=0 Jan 24 08:40:42 crc kubenswrapper[4705]: I0124 08:40:42.012789 4705 generic.go:334] "Generic (PLEG): container finished" podID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerID="6e3fe5e9638f66f1c564e527286a236c45fd9302675b4d05c7757c5f1641ae08" exitCode=0 Jan 24 08:40:42 crc kubenswrapper[4705]: I0124 08:40:42.012525 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e4bccb-4c85-438c-97dd-e47341b379e5","Type":"ContainerDied","Data":"376c1ba9a9a10cc06a9e6de1cebc1516ede5fcf253d4ff9b3e06c4c2a582bb0e"} Jan 24 08:40:42 crc kubenswrapper[4705]: I0124 08:40:42.012860 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e4bccb-4c85-438c-97dd-e47341b379e5","Type":"ContainerDied","Data":"6e3fe5e9638f66f1c564e527286a236c45fd9302675b4d05c7757c5f1641ae08"} Jan 24 08:40:43 crc kubenswrapper[4705]: I0124 08:40:43.846644 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.18:9090/-/ready\": dial tcp 10.217.1.18:9090: connect: connection refused" Jan 24 08:40:48 crc kubenswrapper[4705]: I0124 08:40:48.847344 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.18:9090/-/ready\": dial tcp 10.217.1.18:9090: connect: connection refused" Jan 24 08:40:53 crc kubenswrapper[4705]: I0124 08:40:53.847254 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" 
podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.18:9090/-/ready\": dial tcp 10.217.1.18:9090: connect: connection refused"
Jan 24 08:40:53 crc kubenswrapper[4705]: I0124 08:40:53.847864 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.827999 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.970580 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e5e4bccb-4c85-438c-97dd-e47341b379e5-config-out\") pod \"e5e4bccb-4c85-438c-97dd-e47341b379e5\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") "
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.970667 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-thanos-prometheus-http-client-file\") pod \"e5e4bccb-4c85-438c-97dd-e47341b379e5\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") "
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.970724 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-secret-combined-ca-bundle\") pod \"e5e4bccb-4c85-438c-97dd-e47341b379e5\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") "
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.970762 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"e5e4bccb-4c85-438c-97dd-e47341b379e5\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") "
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.970787 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-0\") pod \"e5e4bccb-4c85-438c-97dd-e47341b379e5\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") "
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.970842 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"e5e4bccb-4c85-438c-97dd-e47341b379e5\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") "
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.970873 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn4bd\" (UniqueName: \"kubernetes.io/projected/e5e4bccb-4c85-438c-97dd-e47341b379e5-kube-api-access-sn4bd\") pod \"e5e4bccb-4c85-438c-97dd-e47341b379e5\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") "
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.970905 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-2\") pod \"e5e4bccb-4c85-438c-97dd-e47341b379e5\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") "
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.970942 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e5e4bccb-4c85-438c-97dd-e47341b379e5-tls-assets\") pod \"e5e4bccb-4c85-438c-97dd-e47341b379e5\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") "
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.971249 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"e5e4bccb-4c85-438c-97dd-e47341b379e5\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") "
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.971281 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-1\") pod \"e5e4bccb-4c85-438c-97dd-e47341b379e5\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") "
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.971370 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-config\") pod \"e5e4bccb-4c85-438c-97dd-e47341b379e5\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") "
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.971456 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config\") pod \"e5e4bccb-4c85-438c-97dd-e47341b379e5\" (UID: \"e5e4bccb-4c85-438c-97dd-e47341b379e5\") "
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.972842 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "e5e4bccb-4c85-438c-97dd-e47341b379e5" (UID: "e5e4bccb-4c85-438c-97dd-e47341b379e5"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.972856 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "e5e4bccb-4c85-438c-97dd-e47341b379e5" (UID: "e5e4bccb-4c85-438c-97dd-e47341b379e5"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.974453 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "e5e4bccb-4c85-438c-97dd-e47341b379e5" (UID: "e5e4bccb-4c85-438c-97dd-e47341b379e5"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.978310 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "e5e4bccb-4c85-438c-97dd-e47341b379e5" (UID: "e5e4bccb-4c85-438c-97dd-e47341b379e5"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.978382 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-config" (OuterVolumeSpecName: "config") pod "e5e4bccb-4c85-438c-97dd-e47341b379e5" (UID: "e5e4bccb-4c85-438c-97dd-e47341b379e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.979407 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "e5e4bccb-4c85-438c-97dd-e47341b379e5" (UID: "e5e4bccb-4c85-438c-97dd-e47341b379e5"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.980088 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "e5e4bccb-4c85-438c-97dd-e47341b379e5" (UID: "e5e4bccb-4c85-438c-97dd-e47341b379e5"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.980320 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e4bccb-4c85-438c-97dd-e47341b379e5-kube-api-access-sn4bd" (OuterVolumeSpecName: "kube-api-access-sn4bd") pod "e5e4bccb-4c85-438c-97dd-e47341b379e5" (UID: "e5e4bccb-4c85-438c-97dd-e47341b379e5"). InnerVolumeSpecName "kube-api-access-sn4bd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.980558 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "e5e4bccb-4c85-438c-97dd-e47341b379e5" (UID: "e5e4bccb-4c85-438c-97dd-e47341b379e5"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.981660 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e4bccb-4c85-438c-97dd-e47341b379e5-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "e5e4bccb-4c85-438c-97dd-e47341b379e5" (UID: "e5e4bccb-4c85-438c-97dd-e47341b379e5"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.986641 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5e4bccb-4c85-438c-97dd-e47341b379e5-config-out" (OuterVolumeSpecName: "config-out") pod "e5e4bccb-4c85-438c-97dd-e47341b379e5" (UID: "e5e4bccb-4c85-438c-97dd-e47341b379e5"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 08:40:56 crc kubenswrapper[4705]: I0124 08:40:56.987633 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "e5e4bccb-4c85-438c-97dd-e47341b379e5" (UID: "e5e4bccb-4c85-438c-97dd-e47341b379e5"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.065217 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config" (OuterVolumeSpecName: "web-config") pod "e5e4bccb-4c85-438c-97dd-e47341b379e5" (UID: "e5e4bccb-4c85-438c-97dd-e47341b379e5"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.073966 4705 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\""
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.074016 4705 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.074033 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-config\") on node \"crc\" DevicePath \"\""
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.074044 4705 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config\") on node \"crc\" DevicePath \"\""
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.074058 4705 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e5e4bccb-4c85-438c-97dd-e47341b379e5-config-out\") on node \"crc\" DevicePath \"\""
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.074069 4705 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.074088 4705 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.074145 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.074162 4705 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.074178 4705 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e5e4bccb-4c85-438c-97dd-e47341b379e5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\""
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.074194 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn4bd\" (UniqueName: \"kubernetes.io/projected/e5e4bccb-4c85-438c-97dd-e47341b379e5-kube-api-access-sn4bd\") on node \"crc\" DevicePath \"\""
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.074206 4705 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e5e4bccb-4c85-438c-97dd-e47341b379e5-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.074218 4705 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e5e4bccb-4c85-438c-97dd-e47341b379e5-tls-assets\") on node \"crc\" DevicePath \"\""
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.096604 4705 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.156647 4705 generic.go:334] "Generic (PLEG): container finished" podID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerID="ad7e2de8ff786a17799930a5b08f6e1fd473f566424e0e34f9985119f045d522" exitCode=0
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.156697 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e4bccb-4c85-438c-97dd-e47341b379e5","Type":"ContainerDied","Data":"ad7e2de8ff786a17799930a5b08f6e1fd473f566424e0e34f9985119f045d522"}
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.156726 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e4bccb-4c85-438c-97dd-e47341b379e5","Type":"ContainerDied","Data":"3e611684257308962fa4bcc6f5efe27b0bd1f74b98dd3f12c596707ad29db00a"}
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.156743 4705 scope.go:117] "RemoveContainer" containerID="376c1ba9a9a10cc06a9e6de1cebc1516ede5fcf253d4ff9b3e06c4c2a582bb0e"
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.156978 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.175785 4705 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.199921 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.202725 4705 scope.go:117] "RemoveContainer" containerID="ad7e2de8ff786a17799930a5b08f6e1fd473f566424e0e34f9985119f045d522"
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.211632 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.227987 4705 scope.go:117] "RemoveContainer" containerID="6e3fe5e9638f66f1c564e527286a236c45fd9302675b4d05c7757c5f1641ae08"
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.254988 4705 scope.go:117] "RemoveContainer" containerID="d017e327c216b36222cab50471fb824aab10f78e29544650b9905f514e8f22e3"
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.292758 4705 scope.go:117] "RemoveContainer" containerID="376c1ba9a9a10cc06a9e6de1cebc1516ede5fcf253d4ff9b3e06c4c2a582bb0e"
Jan 24 08:40:57 crc kubenswrapper[4705]: E0124 08:40:57.293329 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"376c1ba9a9a10cc06a9e6de1cebc1516ede5fcf253d4ff9b3e06c4c2a582bb0e\": container with ID starting with 376c1ba9a9a10cc06a9e6de1cebc1516ede5fcf253d4ff9b3e06c4c2a582bb0e not found: ID does not exist" containerID="376c1ba9a9a10cc06a9e6de1cebc1516ede5fcf253d4ff9b3e06c4c2a582bb0e"
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.293383 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"376c1ba9a9a10cc06a9e6de1cebc1516ede5fcf253d4ff9b3e06c4c2a582bb0e"} err="failed to get container status \"376c1ba9a9a10cc06a9e6de1cebc1516ede5fcf253d4ff9b3e06c4c2a582bb0e\": rpc error: code = NotFound desc = could not find container \"376c1ba9a9a10cc06a9e6de1cebc1516ede5fcf253d4ff9b3e06c4c2a582bb0e\": container with ID starting with 376c1ba9a9a10cc06a9e6de1cebc1516ede5fcf253d4ff9b3e06c4c2a582bb0e not found: ID does not exist"
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.293406 4705 scope.go:117] "RemoveContainer" containerID="ad7e2de8ff786a17799930a5b08f6e1fd473f566424e0e34f9985119f045d522"
Jan 24 08:40:57 crc kubenswrapper[4705]: E0124 08:40:57.293842 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad7e2de8ff786a17799930a5b08f6e1fd473f566424e0e34f9985119f045d522\": container with ID starting with ad7e2de8ff786a17799930a5b08f6e1fd473f566424e0e34f9985119f045d522 not found: ID does not exist" containerID="ad7e2de8ff786a17799930a5b08f6e1fd473f566424e0e34f9985119f045d522"
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.293972 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad7e2de8ff786a17799930a5b08f6e1fd473f566424e0e34f9985119f045d522"} err="failed to get container status \"ad7e2de8ff786a17799930a5b08f6e1fd473f566424e0e34f9985119f045d522\": rpc error: code = NotFound desc = could not find container \"ad7e2de8ff786a17799930a5b08f6e1fd473f566424e0e34f9985119f045d522\": container with ID starting with ad7e2de8ff786a17799930a5b08f6e1fd473f566424e0e34f9985119f045d522 not found: ID does not exist"
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.294084 4705 scope.go:117] "RemoveContainer" containerID="6e3fe5e9638f66f1c564e527286a236c45fd9302675b4d05c7757c5f1641ae08"
Jan 24 08:40:57 crc kubenswrapper[4705]: E0124 08:40:57.294637 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e3fe5e9638f66f1c564e527286a236c45fd9302675b4d05c7757c5f1641ae08\": container with ID starting with 6e3fe5e9638f66f1c564e527286a236c45fd9302675b4d05c7757c5f1641ae08 not found: ID does not exist" containerID="6e3fe5e9638f66f1c564e527286a236c45fd9302675b4d05c7757c5f1641ae08"
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.294669 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e3fe5e9638f66f1c564e527286a236c45fd9302675b4d05c7757c5f1641ae08"} err="failed to get container status \"6e3fe5e9638f66f1c564e527286a236c45fd9302675b4d05c7757c5f1641ae08\": rpc error: code = NotFound desc = could not find container \"6e3fe5e9638f66f1c564e527286a236c45fd9302675b4d05c7757c5f1641ae08\": container with ID starting with 6e3fe5e9638f66f1c564e527286a236c45fd9302675b4d05c7757c5f1641ae08 not found: ID does not exist"
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.294689 4705 scope.go:117] "RemoveContainer" containerID="d017e327c216b36222cab50471fb824aab10f78e29544650b9905f514e8f22e3"
Jan 24 08:40:57 crc kubenswrapper[4705]: E0124 08:40:57.294969 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d017e327c216b36222cab50471fb824aab10f78e29544650b9905f514e8f22e3\": container with ID starting with d017e327c216b36222cab50471fb824aab10f78e29544650b9905f514e8f22e3 not found: ID does not exist" containerID="d017e327c216b36222cab50471fb824aab10f78e29544650b9905f514e8f22e3"
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.295084 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d017e327c216b36222cab50471fb824aab10f78e29544650b9905f514e8f22e3"} err="failed to get container status \"d017e327c216b36222cab50471fb824aab10f78e29544650b9905f514e8f22e3\": rpc error: code = NotFound desc = could not find container \"d017e327c216b36222cab50471fb824aab10f78e29544650b9905f514e8f22e3\": container with ID starting with d017e327c216b36222cab50471fb824aab10f78e29544650b9905f514e8f22e3 not found: ID does not exist"
Jan 24 08:40:57 crc kubenswrapper[4705]: I0124 08:40:57.586519 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" path="/var/lib/kubelet/pods/e5e4bccb-4c85-438c-97dd-e47341b379e5/volumes"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.073639 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 24 08:40:58 crc kubenswrapper[4705]: E0124 08:40:58.074457 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb588191-355b-423b-9c6d-467f91929771" containerName="extract-utilities"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.074486 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb588191-355b-423b-9c6d-467f91929771" containerName="extract-utilities"
Jan 24 08:40:58 crc kubenswrapper[4705]: E0124 08:40:58.074524 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb588191-355b-423b-9c6d-467f91929771" containerName="registry-server"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.074538 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb588191-355b-423b-9c6d-467f91929771" containerName="registry-server"
Jan 24 08:40:58 crc kubenswrapper[4705]: E0124 08:40:58.074576 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="init-config-reloader"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.074592 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="init-config-reloader"
Jan 24 08:40:58 crc kubenswrapper[4705]: E0124 08:40:58.074624 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="prometheus"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.074637 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="prometheus"
Jan 24 08:40:58 crc kubenswrapper[4705]: E0124 08:40:58.074664 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb588191-355b-423b-9c6d-467f91929771" containerName="extract-content"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.074679 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb588191-355b-423b-9c6d-467f91929771" containerName="extract-content"
Jan 24 08:40:58 crc kubenswrapper[4705]: E0124 08:40:58.074698 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="thanos-sidecar"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.074711 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="thanos-sidecar"
Jan 24 08:40:58 crc kubenswrapper[4705]: E0124 08:40:58.074733 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="config-reloader"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.074747 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="config-reloader"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.075227 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb588191-355b-423b-9c6d-467f91929771" containerName="registry-server"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.075300 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="config-reloader"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.075332 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="prometheus"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.075357 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e4bccb-4c85-438c-97dd-e47341b379e5" containerName="thanos-sidecar"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.079102 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.083792 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.083913 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-c9h62"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.083879 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.086063 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.086553 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.086635 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.086934 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.087162 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.093306 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.095056 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.196441 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.196575 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/216859b8-4276-48fb-8695-6f78f29561b1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.196608 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.196658 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.196730 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.196758 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.196787 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-config\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.196843 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.196896 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhmt2\" (UniqueName: \"kubernetes.io/projected/216859b8-4276-48fb-8695-6f78f29561b1-kube-api-access-zhmt2\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.196927 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.196968 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.197002 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.197026 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/216859b8-4276-48fb-8695-6f78f29561b1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.299210 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/216859b8-4276-48fb-8695-6f78f29561b1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.299270 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.299298 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.299370 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.299400 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.299428 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-config\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.299466 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.299522 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhmt2\" (UniqueName: \"kubernetes.io/projected/216859b8-4276-48fb-8695-6f78f29561b1-kube-api-access-zhmt2\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.299548 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.299608 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.299643 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.299666 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/216859b8-4276-48fb-8695-6f78f29561b1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.299721 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.301140 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0"
Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.301784 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName:
\"kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.302717 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.304182 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.305547 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/216859b8-4276-48fb-8695-6f78f29561b1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.305766 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.307577 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.308100 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.309176 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.309891 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.322926 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/216859b8-4276-48fb-8695-6f78f29561b1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:40:58 crc kubenswrapper[4705]: 
I0124 08:40:58.323785 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhmt2\" (UniqueName: \"kubernetes.io/projected/216859b8-4276-48fb-8695-6f78f29561b1-kube-api-access-zhmt2\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.325134 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-config\") pod \"prometheus-metric-storage-0\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.403568 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 24 08:40:58 crc kubenswrapper[4705]: I0124 08:40:58.906556 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:40:59 crc kubenswrapper[4705]: I0124 08:40:59.181406 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"216859b8-4276-48fb-8695-6f78f29561b1","Type":"ContainerStarted","Data":"83f82212becd1d2a52d744b7e863d3824a3a3a66a09eebfa141dc42c6fd71d9e"} Jan 24 08:41:03 crc kubenswrapper[4705]: I0124 08:41:03.222312 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"216859b8-4276-48fb-8695-6f78f29561b1","Type":"ContainerStarted","Data":"3427d3bc8ba7d95bbf911e9e3e092ca2ba4e7e01e00d06d474eb933bac8dd975"} Jan 24 08:41:11 crc kubenswrapper[4705]: I0124 08:41:11.313421 4705 generic.go:334] "Generic (PLEG): container finished" podID="216859b8-4276-48fb-8695-6f78f29561b1" containerID="3427d3bc8ba7d95bbf911e9e3e092ca2ba4e7e01e00d06d474eb933bac8dd975" exitCode=0 Jan 24 08:41:11 crc kubenswrapper[4705]: I0124 
08:41:11.313544 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"216859b8-4276-48fb-8695-6f78f29561b1","Type":"ContainerDied","Data":"3427d3bc8ba7d95bbf911e9e3e092ca2ba4e7e01e00d06d474eb933bac8dd975"} Jan 24 08:41:12 crc kubenswrapper[4705]: I0124 08:41:12.325214 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"216859b8-4276-48fb-8695-6f78f29561b1","Type":"ContainerStarted","Data":"813bb273db2480ebffc771aa00c292d383cc3ecd7540b5a71649d2515fe4fc56"} Jan 24 08:41:16 crc kubenswrapper[4705]: I0124 08:41:16.369388 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"216859b8-4276-48fb-8695-6f78f29561b1","Type":"ContainerStarted","Data":"51a1c389676e3f8b44f59aacd5d8fa52ccf84c4d74ee7a059a3149f9e3f405d9"} Jan 24 08:41:16 crc kubenswrapper[4705]: I0124 08:41:16.370008 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"216859b8-4276-48fb-8695-6f78f29561b1","Type":"ContainerStarted","Data":"c879207cfb63bf2503a173048cbe4a5b445d7e6ca33733263ca850b900ee019f"} Jan 24 08:41:16 crc kubenswrapper[4705]: I0124 08:41:16.419947 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.419925399 podStartE2EDuration="18.419925399s" podCreationTimestamp="2026-01-24 08:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:41:16.406893529 +0000 UTC m=+3615.126766907" watchObservedRunningTime="2026-01-24 08:41:16.419925399 +0000 UTC m=+3615.139798687" Jan 24 08:41:18 crc kubenswrapper[4705]: I0124 08:41:18.405328 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 24 08:41:28 crc kubenswrapper[4705]: I0124 
08:41:28.405237 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 24 08:41:28 crc kubenswrapper[4705]: I0124 08:41:28.414700 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 24 08:41:28 crc kubenswrapper[4705]: I0124 08:41:28.671481 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 24 08:41:37 crc kubenswrapper[4705]: I0124 08:41:37.071268 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:41:37 crc kubenswrapper[4705]: I0124 08:41:37.071634 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:41:57 crc kubenswrapper[4705]: I0124 08:41:57.727244 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fg9zj"] Jan 24 08:41:57 crc kubenswrapper[4705]: I0124 08:41:57.730394 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:41:57 crc kubenswrapper[4705]: I0124 08:41:57.739720 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fg9zj"] Jan 24 08:41:57 crc kubenswrapper[4705]: I0124 08:41:57.862377 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-utilities\") pod \"certified-operators-fg9zj\" (UID: \"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426\") " pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:41:57 crc kubenswrapper[4705]: I0124 08:41:57.862437 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-catalog-content\") pod \"certified-operators-fg9zj\" (UID: \"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426\") " pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:41:57 crc kubenswrapper[4705]: I0124 08:41:57.862514 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2xf8\" (UniqueName: \"kubernetes.io/projected/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-kube-api-access-b2xf8\") pod \"certified-operators-fg9zj\" (UID: \"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426\") " pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:41:57 crc kubenswrapper[4705]: I0124 08:41:57.990936 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-utilities\") pod \"certified-operators-fg9zj\" (UID: \"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426\") " pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:41:57 crc kubenswrapper[4705]: I0124 08:41:57.991044 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-catalog-content\") pod \"certified-operators-fg9zj\" (UID: \"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426\") " pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:41:57 crc kubenswrapper[4705]: I0124 08:41:57.991274 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2xf8\" (UniqueName: \"kubernetes.io/projected/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-kube-api-access-b2xf8\") pod \"certified-operators-fg9zj\" (UID: \"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426\") " pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:41:57 crc kubenswrapper[4705]: I0124 08:41:57.991492 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-utilities\") pod \"certified-operators-fg9zj\" (UID: \"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426\") " pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:41:57 crc kubenswrapper[4705]: I0124 08:41:57.991546 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-catalog-content\") pod \"certified-operators-fg9zj\" (UID: \"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426\") " pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:41:58 crc kubenswrapper[4705]: I0124 08:41:58.013994 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2xf8\" (UniqueName: \"kubernetes.io/projected/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-kube-api-access-b2xf8\") pod \"certified-operators-fg9zj\" (UID: \"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426\") " pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:41:58 crc kubenswrapper[4705]: I0124 08:41:58.058538 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:41:58 crc kubenswrapper[4705]: I0124 08:41:58.544986 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fg9zj"] Jan 24 08:41:59 crc kubenswrapper[4705]: I0124 08:41:59.039577 4705 generic.go:334] "Generic (PLEG): container finished" podID="5af7e6e4-024b-40e3-b1a9-ab9f5dab1426" containerID="406044903fd37726af35d22b247effa4b071d84695c93e1c8dcb6b84bddf88d3" exitCode=0 Jan 24 08:41:59 crc kubenswrapper[4705]: I0124 08:41:59.039680 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg9zj" event={"ID":"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426","Type":"ContainerDied","Data":"406044903fd37726af35d22b247effa4b071d84695c93e1c8dcb6b84bddf88d3"} Jan 24 08:41:59 crc kubenswrapper[4705]: I0124 08:41:59.039941 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg9zj" event={"ID":"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426","Type":"ContainerStarted","Data":"d2b56e8ac3e61b91073c250e2e3ca83cc61d0fb3a08cdda8bc2e40397086e21f"} Jan 24 08:41:59 crc kubenswrapper[4705]: I0124 08:41:59.041634 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 08:42:00 crc kubenswrapper[4705]: I0124 08:42:00.055895 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg9zj" event={"ID":"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426","Type":"ContainerStarted","Data":"6260bcb0c3af0f3da6c5ec7abe984586a002e85925a54d7b6874027fdb2d92f0"} Jan 24 08:42:01 crc kubenswrapper[4705]: I0124 08:42:01.065485 4705 generic.go:334] "Generic (PLEG): container finished" podID="5af7e6e4-024b-40e3-b1a9-ab9f5dab1426" containerID="6260bcb0c3af0f3da6c5ec7abe984586a002e85925a54d7b6874027fdb2d92f0" exitCode=0 Jan 24 08:42:01 crc kubenswrapper[4705]: I0124 08:42:01.065665 4705 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-fg9zj" event={"ID":"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426","Type":"ContainerDied","Data":"6260bcb0c3af0f3da6c5ec7abe984586a002e85925a54d7b6874027fdb2d92f0"} Jan 24 08:42:02 crc kubenswrapper[4705]: I0124 08:42:02.079860 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg9zj" event={"ID":"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426","Type":"ContainerStarted","Data":"0c993ab5e96a7d805f7a693e249b7496b5fae96df79ce183e1b38c9da9ea57c5"} Jan 24 08:42:02 crc kubenswrapper[4705]: I0124 08:42:02.101503 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fg9zj" podStartSLOduration=2.55619732 podStartE2EDuration="5.101485301s" podCreationTimestamp="2026-01-24 08:41:57 +0000 UTC" firstStartedPulling="2026-01-24 08:41:59.041269761 +0000 UTC m=+3657.761143049" lastFinishedPulling="2026-01-24 08:42:01.586557742 +0000 UTC m=+3660.306431030" observedRunningTime="2026-01-24 08:42:02.094662437 +0000 UTC m=+3660.814535725" watchObservedRunningTime="2026-01-24 08:42:02.101485301 +0000 UTC m=+3660.821358589" Jan 24 08:42:07 crc kubenswrapper[4705]: I0124 08:42:07.071804 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:42:07 crc kubenswrapper[4705]: I0124 08:42:07.073625 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:42:07 crc kubenswrapper[4705]: I0124 08:42:07.919629 4705 
scope.go:117] "RemoveContainer" containerID="932ae09537e4e71e5eff156a5a9cd0cd111ae5ecc25a095da0c8c036c5f3f81e" Jan 24 08:42:07 crc kubenswrapper[4705]: I0124 08:42:07.955903 4705 scope.go:117] "RemoveContainer" containerID="9258e61afd92f3f48a68e133897e467187095c596c6e553ee660960af00039d6" Jan 24 08:42:07 crc kubenswrapper[4705]: I0124 08:42:07.998349 4705 scope.go:117] "RemoveContainer" containerID="7b86443d17ce16776890e3684980080d50ab924b6741340a130610f8f137f3ec" Jan 24 08:42:08 crc kubenswrapper[4705]: I0124 08:42:08.058763 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:42:08 crc kubenswrapper[4705]: I0124 08:42:08.058805 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:42:08 crc kubenswrapper[4705]: I0124 08:42:08.110298 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:42:08 crc kubenswrapper[4705]: I0124 08:42:08.210574 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:42:08 crc kubenswrapper[4705]: I0124 08:42:08.358299 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fg9zj"] Jan 24 08:42:10 crc kubenswrapper[4705]: I0124 08:42:10.164181 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fg9zj" podUID="5af7e6e4-024b-40e3-b1a9-ab9f5dab1426" containerName="registry-server" containerID="cri-o://0c993ab5e96a7d805f7a693e249b7496b5fae96df79ce183e1b38c9da9ea57c5" gracePeriod=2 Jan 24 08:42:10 crc kubenswrapper[4705]: I0124 08:42:10.699310 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:42:10 crc kubenswrapper[4705]: I0124 08:42:10.827709 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2xf8\" (UniqueName: \"kubernetes.io/projected/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-kube-api-access-b2xf8\") pod \"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426\" (UID: \"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426\") " Jan 24 08:42:10 crc kubenswrapper[4705]: I0124 08:42:10.827840 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-catalog-content\") pod \"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426\" (UID: \"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426\") " Jan 24 08:42:10 crc kubenswrapper[4705]: I0124 08:42:10.827912 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-utilities\") pod \"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426\" (UID: \"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426\") " Jan 24 08:42:10 crc kubenswrapper[4705]: I0124 08:42:10.828739 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-utilities" (OuterVolumeSpecName: "utilities") pod "5af7e6e4-024b-40e3-b1a9-ab9f5dab1426" (UID: "5af7e6e4-024b-40e3-b1a9-ab9f5dab1426"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:42:10 crc kubenswrapper[4705]: I0124 08:42:10.836200 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-kube-api-access-b2xf8" (OuterVolumeSpecName: "kube-api-access-b2xf8") pod "5af7e6e4-024b-40e3-b1a9-ab9f5dab1426" (UID: "5af7e6e4-024b-40e3-b1a9-ab9f5dab1426"). InnerVolumeSpecName "kube-api-access-b2xf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:42:10 crc kubenswrapper[4705]: I0124 08:42:10.895539 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5af7e6e4-024b-40e3-b1a9-ab9f5dab1426" (UID: "5af7e6e4-024b-40e3-b1a9-ab9f5dab1426"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:42:10 crc kubenswrapper[4705]: I0124 08:42:10.930416 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2xf8\" (UniqueName: \"kubernetes.io/projected/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-kube-api-access-b2xf8\") on node \"crc\" DevicePath \"\"" Jan 24 08:42:10 crc kubenswrapper[4705]: I0124 08:42:10.930453 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:42:10 crc kubenswrapper[4705]: I0124 08:42:10.930463 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:42:11 crc kubenswrapper[4705]: I0124 08:42:11.183211 4705 generic.go:334] "Generic (PLEG): container finished" podID="5af7e6e4-024b-40e3-b1a9-ab9f5dab1426" containerID="0c993ab5e96a7d805f7a693e249b7496b5fae96df79ce183e1b38c9da9ea57c5" exitCode=0 Jan 24 08:42:11 crc kubenswrapper[4705]: I0124 08:42:11.183279 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg9zj" event={"ID":"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426","Type":"ContainerDied","Data":"0c993ab5e96a7d805f7a693e249b7496b5fae96df79ce183e1b38c9da9ea57c5"} Jan 24 08:42:11 crc kubenswrapper[4705]: I0124 08:42:11.183308 4705 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-fg9zj" event={"ID":"5af7e6e4-024b-40e3-b1a9-ab9f5dab1426","Type":"ContainerDied","Data":"d2b56e8ac3e61b91073c250e2e3ca83cc61d0fb3a08cdda8bc2e40397086e21f"} Jan 24 08:42:11 crc kubenswrapper[4705]: I0124 08:42:11.183327 4705 scope.go:117] "RemoveContainer" containerID="0c993ab5e96a7d805f7a693e249b7496b5fae96df79ce183e1b38c9da9ea57c5" Jan 24 08:42:11 crc kubenswrapper[4705]: I0124 08:42:11.183405 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fg9zj" Jan 24 08:42:11 crc kubenswrapper[4705]: I0124 08:42:11.276278 4705 scope.go:117] "RemoveContainer" containerID="6260bcb0c3af0f3da6c5ec7abe984586a002e85925a54d7b6874027fdb2d92f0" Jan 24 08:42:11 crc kubenswrapper[4705]: I0124 08:42:11.306923 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fg9zj"] Jan 24 08:42:11 crc kubenswrapper[4705]: I0124 08:42:11.318254 4705 scope.go:117] "RemoveContainer" containerID="406044903fd37726af35d22b247effa4b071d84695c93e1c8dcb6b84bddf88d3" Jan 24 08:42:11 crc kubenswrapper[4705]: I0124 08:42:11.319513 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fg9zj"] Jan 24 08:42:11 crc kubenswrapper[4705]: I0124 08:42:11.368814 4705 scope.go:117] "RemoveContainer" containerID="0c993ab5e96a7d805f7a693e249b7496b5fae96df79ce183e1b38c9da9ea57c5" Jan 24 08:42:11 crc kubenswrapper[4705]: E0124 08:42:11.375496 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c993ab5e96a7d805f7a693e249b7496b5fae96df79ce183e1b38c9da9ea57c5\": container with ID starting with 0c993ab5e96a7d805f7a693e249b7496b5fae96df79ce183e1b38c9da9ea57c5 not found: ID does not exist" containerID="0c993ab5e96a7d805f7a693e249b7496b5fae96df79ce183e1b38c9da9ea57c5" Jan 24 08:42:11 crc kubenswrapper[4705]: I0124 
08:42:11.375558 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c993ab5e96a7d805f7a693e249b7496b5fae96df79ce183e1b38c9da9ea57c5"} err="failed to get container status \"0c993ab5e96a7d805f7a693e249b7496b5fae96df79ce183e1b38c9da9ea57c5\": rpc error: code = NotFound desc = could not find container \"0c993ab5e96a7d805f7a693e249b7496b5fae96df79ce183e1b38c9da9ea57c5\": container with ID starting with 0c993ab5e96a7d805f7a693e249b7496b5fae96df79ce183e1b38c9da9ea57c5 not found: ID does not exist" Jan 24 08:42:11 crc kubenswrapper[4705]: I0124 08:42:11.375587 4705 scope.go:117] "RemoveContainer" containerID="6260bcb0c3af0f3da6c5ec7abe984586a002e85925a54d7b6874027fdb2d92f0" Jan 24 08:42:11 crc kubenswrapper[4705]: E0124 08:42:11.376078 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6260bcb0c3af0f3da6c5ec7abe984586a002e85925a54d7b6874027fdb2d92f0\": container with ID starting with 6260bcb0c3af0f3da6c5ec7abe984586a002e85925a54d7b6874027fdb2d92f0 not found: ID does not exist" containerID="6260bcb0c3af0f3da6c5ec7abe984586a002e85925a54d7b6874027fdb2d92f0" Jan 24 08:42:11 crc kubenswrapper[4705]: I0124 08:42:11.376148 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6260bcb0c3af0f3da6c5ec7abe984586a002e85925a54d7b6874027fdb2d92f0"} err="failed to get container status \"6260bcb0c3af0f3da6c5ec7abe984586a002e85925a54d7b6874027fdb2d92f0\": rpc error: code = NotFound desc = could not find container \"6260bcb0c3af0f3da6c5ec7abe984586a002e85925a54d7b6874027fdb2d92f0\": container with ID starting with 6260bcb0c3af0f3da6c5ec7abe984586a002e85925a54d7b6874027fdb2d92f0 not found: ID does not exist" Jan 24 08:42:11 crc kubenswrapper[4705]: I0124 08:42:11.376192 4705 scope.go:117] "RemoveContainer" containerID="406044903fd37726af35d22b247effa4b071d84695c93e1c8dcb6b84bddf88d3" Jan 24 08:42:11 crc 
kubenswrapper[4705]: E0124 08:42:11.376766 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"406044903fd37726af35d22b247effa4b071d84695c93e1c8dcb6b84bddf88d3\": container with ID starting with 406044903fd37726af35d22b247effa4b071d84695c93e1c8dcb6b84bddf88d3 not found: ID does not exist" containerID="406044903fd37726af35d22b247effa4b071d84695c93e1c8dcb6b84bddf88d3" Jan 24 08:42:11 crc kubenswrapper[4705]: I0124 08:42:11.376803 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"406044903fd37726af35d22b247effa4b071d84695c93e1c8dcb6b84bddf88d3"} err="failed to get container status \"406044903fd37726af35d22b247effa4b071d84695c93e1c8dcb6b84bddf88d3\": rpc error: code = NotFound desc = could not find container \"406044903fd37726af35d22b247effa4b071d84695c93e1c8dcb6b84bddf88d3\": container with ID starting with 406044903fd37726af35d22b247effa4b071d84695c93e1c8dcb6b84bddf88d3 not found: ID does not exist" Jan 24 08:42:11 crc kubenswrapper[4705]: I0124 08:42:11.587606 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5af7e6e4-024b-40e3-b1a9-ab9f5dab1426" path="/var/lib/kubelet/pods/5af7e6e4-024b-40e3-b1a9-ab9f5dab1426/volumes" Jan 24 08:42:37 crc kubenswrapper[4705]: I0124 08:42:37.165399 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:42:37 crc kubenswrapper[4705]: I0124 08:42:37.166118 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 24 08:42:37 crc kubenswrapper[4705]: I0124 08:42:37.166232 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 08:42:37 crc kubenswrapper[4705]: I0124 08:42:37.167504 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 08:42:37 crc kubenswrapper[4705]: I0124 08:42:37.167593 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" gracePeriod=600 Jan 24 08:42:37 crc kubenswrapper[4705]: E0124 08:42:37.325174 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:42:37 crc kubenswrapper[4705]: E0124 08:42:37.360325 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7b3b969_5164_4f10_8758_72b7e2f4b762.slice/crio-1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73.scope\": RecentStats: unable to find data in memory cache]" Jan 24 
08:42:37 crc kubenswrapper[4705]: I0124 08:42:37.797649 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" exitCode=0 Jan 24 08:42:37 crc kubenswrapper[4705]: I0124 08:42:37.797697 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73"} Jan 24 08:42:37 crc kubenswrapper[4705]: I0124 08:42:37.797998 4705 scope.go:117] "RemoveContainer" containerID="3fa3be8e6420f4da812f5a765ecc9e11eb0505d6fe3b7bf40a938f2ed494500a" Jan 24 08:42:37 crc kubenswrapper[4705]: I0124 08:42:37.798616 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:42:37 crc kubenswrapper[4705]: E0124 08:42:37.799006 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:42:40 crc kubenswrapper[4705]: I0124 08:42:40.718677 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7c64596589-v9zxl_f5382856-3a6e-4d10-beb2-9df688e2f6c7/manager/0.log" Jan 24 08:42:42 crc kubenswrapper[4705]: I0124 08:42:42.165890 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 24 08:42:42 crc kubenswrapper[4705]: I0124 08:42:42.166437 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" 
podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerName="aodh-api" containerID="cri-o://57f18f4816a46c60510af73605708355004b9c25a0b319dde5f9deb6404582d8" gracePeriod=30 Jan 24 08:42:42 crc kubenswrapper[4705]: I0124 08:42:42.166982 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerName="aodh-evaluator" containerID="cri-o://168962d43f9d97d12230e2d04afa24e5701a3a3319e5e4c9c74f4046de8b84ae" gracePeriod=30 Jan 24 08:42:42 crc kubenswrapper[4705]: I0124 08:42:42.167153 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerName="aodh-notifier" containerID="cri-o://5d427c6aa89e1c1f4c26cc80d909872a4b6fbea9a2e367634bc6194fe50b3e70" gracePeriod=30 Jan 24 08:42:42 crc kubenswrapper[4705]: I0124 08:42:42.166884 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerName="aodh-listener" containerID="cri-o://25f4347cea018898acecd256034ad4b9fbce76c75e905a64a89297809d169fe0" gracePeriod=30 Jan 24 08:42:42 crc kubenswrapper[4705]: I0124 08:42:42.852275 4705 generic.go:334] "Generic (PLEG): container finished" podID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerID="168962d43f9d97d12230e2d04afa24e5701a3a3319e5e4c9c74f4046de8b84ae" exitCode=0 Jan 24 08:42:42 crc kubenswrapper[4705]: I0124 08:42:42.852617 4705 generic.go:334] "Generic (PLEG): container finished" podID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerID="57f18f4816a46c60510af73605708355004b9c25a0b319dde5f9deb6404582d8" exitCode=0 Jan 24 08:42:42 crc kubenswrapper[4705]: I0124 08:42:42.852671 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04","Type":"ContainerDied","Data":"168962d43f9d97d12230e2d04afa24e5701a3a3319e5e4c9c74f4046de8b84ae"} Jan 24 
08:42:42 crc kubenswrapper[4705]: I0124 08:42:42.852714 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04","Type":"ContainerDied","Data":"57f18f4816a46c60510af73605708355004b9c25a0b319dde5f9deb6404582d8"} Jan 24 08:42:43 crc kubenswrapper[4705]: I0124 08:42:43.863005 4705 generic.go:334] "Generic (PLEG): container finished" podID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerID="5d427c6aa89e1c1f4c26cc80d909872a4b6fbea9a2e367634bc6194fe50b3e70" exitCode=0 Jan 24 08:42:43 crc kubenswrapper[4705]: I0124 08:42:43.863115 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04","Type":"ContainerDied","Data":"5d427c6aa89e1c1f4c26cc80d909872a4b6fbea9a2e367634bc6194fe50b3e70"} Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.636176 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.796998 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-scripts\") pod \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.797043 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-public-tls-certs\") pod \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.797143 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-internal-tls-certs\") pod \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\" 
(UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.797259 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5p7t\" (UniqueName: \"kubernetes.io/projected/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-kube-api-access-b5p7t\") pod \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.797329 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-combined-ca-bundle\") pod \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.797377 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-config-data\") pod \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\" (UID: \"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04\") " Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.803500 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-kube-api-access-b5p7t" (OuterVolumeSpecName: "kube-api-access-b5p7t") pod "eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" (UID: "eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04"). InnerVolumeSpecName "kube-api-access-b5p7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.806242 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-scripts" (OuterVolumeSpecName: "scripts") pod "eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" (UID: "eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.866259 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" (UID: "eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.883678 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" (UID: "eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.885073 4705 generic.go:334] "Generic (PLEG): container finished" podID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerID="25f4347cea018898acecd256034ad4b9fbce76c75e905a64a89297809d169fe0" exitCode=0 Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.885124 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04","Type":"ContainerDied","Data":"25f4347cea018898acecd256034ad4b9fbce76c75e905a64a89297809d169fe0"} Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.885150 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04","Type":"ContainerDied","Data":"035e3e365b86e4d6a678017e4acbe7b26c052b8d201c4a809317acc34a6c029a"} Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.885166 4705 scope.go:117] "RemoveContainer" containerID="25f4347cea018898acecd256034ad4b9fbce76c75e905a64a89297809d169fe0" Jan 24 08:42:44 crc kubenswrapper[4705]: 
I0124 08:42:44.885314 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.902611 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.902651 4705 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.902663 4705 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.902676 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5p7t\" (UniqueName: \"kubernetes.io/projected/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-kube-api-access-b5p7t\") on node \"crc\" DevicePath \"\"" Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.925937 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" (UID: "eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.937218 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-config-data" (OuterVolumeSpecName: "config-data") pod "eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" (UID: "eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.965527 4705 scope.go:117] "RemoveContainer" containerID="5d427c6aa89e1c1f4c26cc80d909872a4b6fbea9a2e367634bc6194fe50b3e70" Jan 24 08:42:44 crc kubenswrapper[4705]: I0124 08:42:44.987390 4705 scope.go:117] "RemoveContainer" containerID="168962d43f9d97d12230e2d04afa24e5701a3a3319e5e4c9c74f4046de8b84ae" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.004717 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.004744 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.036132 4705 scope.go:117] "RemoveContainer" containerID="57f18f4816a46c60510af73605708355004b9c25a0b319dde5f9deb6404582d8" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.057459 4705 scope.go:117] "RemoveContainer" containerID="25f4347cea018898acecd256034ad4b9fbce76c75e905a64a89297809d169fe0" Jan 24 08:42:45 crc kubenswrapper[4705]: E0124 08:42:45.058003 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25f4347cea018898acecd256034ad4b9fbce76c75e905a64a89297809d169fe0\": container with ID starting with 25f4347cea018898acecd256034ad4b9fbce76c75e905a64a89297809d169fe0 not found: ID does not exist" containerID="25f4347cea018898acecd256034ad4b9fbce76c75e905a64a89297809d169fe0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.058045 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"25f4347cea018898acecd256034ad4b9fbce76c75e905a64a89297809d169fe0"} err="failed to get container status \"25f4347cea018898acecd256034ad4b9fbce76c75e905a64a89297809d169fe0\": rpc error: code = NotFound desc = could not find container \"25f4347cea018898acecd256034ad4b9fbce76c75e905a64a89297809d169fe0\": container with ID starting with 25f4347cea018898acecd256034ad4b9fbce76c75e905a64a89297809d169fe0 not found: ID does not exist" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.058078 4705 scope.go:117] "RemoveContainer" containerID="5d427c6aa89e1c1f4c26cc80d909872a4b6fbea9a2e367634bc6194fe50b3e70" Jan 24 08:42:45 crc kubenswrapper[4705]: E0124 08:42:45.058406 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d427c6aa89e1c1f4c26cc80d909872a4b6fbea9a2e367634bc6194fe50b3e70\": container with ID starting with 5d427c6aa89e1c1f4c26cc80d909872a4b6fbea9a2e367634bc6194fe50b3e70 not found: ID does not exist" containerID="5d427c6aa89e1c1f4c26cc80d909872a4b6fbea9a2e367634bc6194fe50b3e70" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.058440 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d427c6aa89e1c1f4c26cc80d909872a4b6fbea9a2e367634bc6194fe50b3e70"} err="failed to get container status \"5d427c6aa89e1c1f4c26cc80d909872a4b6fbea9a2e367634bc6194fe50b3e70\": rpc error: code = NotFound desc = could not find container \"5d427c6aa89e1c1f4c26cc80d909872a4b6fbea9a2e367634bc6194fe50b3e70\": container with ID starting with 5d427c6aa89e1c1f4c26cc80d909872a4b6fbea9a2e367634bc6194fe50b3e70 not found: ID does not exist" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.058458 4705 scope.go:117] "RemoveContainer" containerID="168962d43f9d97d12230e2d04afa24e5701a3a3319e5e4c9c74f4046de8b84ae" Jan 24 08:42:45 crc kubenswrapper[4705]: E0124 08:42:45.058850 4705 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"168962d43f9d97d12230e2d04afa24e5701a3a3319e5e4c9c74f4046de8b84ae\": container with ID starting with 168962d43f9d97d12230e2d04afa24e5701a3a3319e5e4c9c74f4046de8b84ae not found: ID does not exist" containerID="168962d43f9d97d12230e2d04afa24e5701a3a3319e5e4c9c74f4046de8b84ae" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.058870 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168962d43f9d97d12230e2d04afa24e5701a3a3319e5e4c9c74f4046de8b84ae"} err="failed to get container status \"168962d43f9d97d12230e2d04afa24e5701a3a3319e5e4c9c74f4046de8b84ae\": rpc error: code = NotFound desc = could not find container \"168962d43f9d97d12230e2d04afa24e5701a3a3319e5e4c9c74f4046de8b84ae\": container with ID starting with 168962d43f9d97d12230e2d04afa24e5701a3a3319e5e4c9c74f4046de8b84ae not found: ID does not exist" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.058893 4705 scope.go:117] "RemoveContainer" containerID="57f18f4816a46c60510af73605708355004b9c25a0b319dde5f9deb6404582d8" Jan 24 08:42:45 crc kubenswrapper[4705]: E0124 08:42:45.059164 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57f18f4816a46c60510af73605708355004b9c25a0b319dde5f9deb6404582d8\": container with ID starting with 57f18f4816a46c60510af73605708355004b9c25a0b319dde5f9deb6404582d8 not found: ID does not exist" containerID="57f18f4816a46c60510af73605708355004b9c25a0b319dde5f9deb6404582d8" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.059187 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f18f4816a46c60510af73605708355004b9c25a0b319dde5f9deb6404582d8"} err="failed to get container status \"57f18f4816a46c60510af73605708355004b9c25a0b319dde5f9deb6404582d8\": rpc error: code = NotFound desc = could not find container 
\"57f18f4816a46c60510af73605708355004b9c25a0b319dde5f9deb6404582d8\": container with ID starting with 57f18f4816a46c60510af73605708355004b9c25a0b319dde5f9deb6404582d8 not found: ID does not exist" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.225105 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.235487 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.257591 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 24 08:42:45 crc kubenswrapper[4705]: E0124 08:42:45.258230 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerName="aodh-evaluator" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.258253 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerName="aodh-evaluator" Jan 24 08:42:45 crc kubenswrapper[4705]: E0124 08:42:45.258269 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerName="aodh-listener" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.258276 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerName="aodh-listener" Jan 24 08:42:45 crc kubenswrapper[4705]: E0124 08:42:45.258297 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af7e6e4-024b-40e3-b1a9-ab9f5dab1426" containerName="registry-server" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.258303 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af7e6e4-024b-40e3-b1a9-ab9f5dab1426" containerName="registry-server" Jan 24 08:42:45 crc kubenswrapper[4705]: E0124 08:42:45.258329 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" 
containerName="aodh-api" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.258335 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerName="aodh-api" Jan 24 08:42:45 crc kubenswrapper[4705]: E0124 08:42:45.258346 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af7e6e4-024b-40e3-b1a9-ab9f5dab1426" containerName="extract-content" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.258351 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af7e6e4-024b-40e3-b1a9-ab9f5dab1426" containerName="extract-content" Jan 24 08:42:45 crc kubenswrapper[4705]: E0124 08:42:45.258363 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af7e6e4-024b-40e3-b1a9-ab9f5dab1426" containerName="extract-utilities" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.258369 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af7e6e4-024b-40e3-b1a9-ab9f5dab1426" containerName="extract-utilities" Jan 24 08:42:45 crc kubenswrapper[4705]: E0124 08:42:45.258383 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerName="aodh-notifier" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.258389 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerName="aodh-notifier" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.258621 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerName="aodh-notifier" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.258640 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerName="aodh-evaluator" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.258658 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerName="aodh-api" 
Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.258668 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af7e6e4-024b-40e3-b1a9-ab9f5dab1426" containerName="registry-server" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.258683 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" containerName="aodh-listener" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.260623 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.271166 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.271443 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-q9ztl" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.271861 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.275372 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.276358 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.281735 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.545043 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95e14985-fae8-4e28-91fe-4234d31f3f33-internal-tls-certs\") pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.545248 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95e14985-fae8-4e28-91fe-4234d31f3f33-config-data\") pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.545371 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95e14985-fae8-4e28-91fe-4234d31f3f33-public-tls-certs\") pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.545769 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95e14985-fae8-4e28-91fe-4234d31f3f33-combined-ca-bundle\") pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.555542 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95e14985-fae8-4e28-91fe-4234d31f3f33-scripts\") pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.555667 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhtjh\" (UniqueName: \"kubernetes.io/projected/95e14985-fae8-4e28-91fe-4234d31f3f33-kube-api-access-fhtjh\") pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.595101 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04" path="/var/lib/kubelet/pods/eb1b0eb4-2f16-45ba-bf2b-ae8a76cd8f04/volumes" Jan 
24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.657539 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95e14985-fae8-4e28-91fe-4234d31f3f33-combined-ca-bundle\") pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.657637 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95e14985-fae8-4e28-91fe-4234d31f3f33-scripts\") pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.657658 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhtjh\" (UniqueName: \"kubernetes.io/projected/95e14985-fae8-4e28-91fe-4234d31f3f33-kube-api-access-fhtjh\") pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.657773 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95e14985-fae8-4e28-91fe-4234d31f3f33-internal-tls-certs\") pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.657846 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95e14985-fae8-4e28-91fe-4234d31f3f33-config-data\") pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.657897 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95e14985-fae8-4e28-91fe-4234d31f3f33-public-tls-certs\") pod \"aodh-0\" (UID: 
\"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.662767 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95e14985-fae8-4e28-91fe-4234d31f3f33-public-tls-certs\") pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.663055 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95e14985-fae8-4e28-91fe-4234d31f3f33-scripts\") pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.663271 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95e14985-fae8-4e28-91fe-4234d31f3f33-internal-tls-certs\") pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.663897 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95e14985-fae8-4e28-91fe-4234d31f3f33-combined-ca-bundle\") pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.664312 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95e14985-fae8-4e28-91fe-4234d31f3f33-config-data\") pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.678842 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhtjh\" (UniqueName: \"kubernetes.io/projected/95e14985-fae8-4e28-91fe-4234d31f3f33-kube-api-access-fhtjh\") 
pod \"aodh-0\" (UID: \"95e14985-fae8-4e28-91fe-4234d31f3f33\") " pod="openstack/aodh-0" Jan 24 08:42:45 crc kubenswrapper[4705]: I0124 08:42:45.925342 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 24 08:42:46 crc kubenswrapper[4705]: I0124 08:42:46.421107 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 24 08:42:46 crc kubenswrapper[4705]: I0124 08:42:46.906059 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"95e14985-fae8-4e28-91fe-4234d31f3f33","Type":"ContainerStarted","Data":"d5990fcf15928b989ace4624a73d204493307c1f75ea1c1451f489234e5c411d"} Jan 24 08:42:48 crc kubenswrapper[4705]: I0124 08:42:48.044246 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"95e14985-fae8-4e28-91fe-4234d31f3f33","Type":"ContainerStarted","Data":"6c146641d1d214807d25490f796f6775dd27a0c070cbc4e33a8515b403553509"} Jan 24 08:42:49 crc kubenswrapper[4705]: I0124 08:42:49.054539 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"95e14985-fae8-4e28-91fe-4234d31f3f33","Type":"ContainerStarted","Data":"7423b59841997ac444a4711945158a09f92d69db059ea8e161af007d2eac018f"} Jan 24 08:42:50 crc kubenswrapper[4705]: I0124 08:42:50.069928 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"95e14985-fae8-4e28-91fe-4234d31f3f33","Type":"ContainerStarted","Data":"a48235824078fb2c0f17c4b936628d264a5fc22ff153b75fbec4781ca43ba947"} Jan 24 08:42:50 crc kubenswrapper[4705]: I0124 08:42:50.070518 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"95e14985-fae8-4e28-91fe-4234d31f3f33","Type":"ContainerStarted","Data":"8d27d272d120dd37ef98e5b968ae9fde4fa3e179da52fef75f5e655e90ca76bd"} Jan 24 08:42:50 crc kubenswrapper[4705]: I0124 08:42:50.105691 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/aodh-0" podStartSLOduration=1.768287773 podStartE2EDuration="5.105641263s" podCreationTimestamp="2026-01-24 08:42:45 +0000 UTC" firstStartedPulling="2026-01-24 08:42:46.420669764 +0000 UTC m=+3705.140543052" lastFinishedPulling="2026-01-24 08:42:49.758023254 +0000 UTC m=+3708.477896542" observedRunningTime="2026-01-24 08:42:50.091116491 +0000 UTC m=+3708.810989779" watchObservedRunningTime="2026-01-24 08:42:50.105641263 +0000 UTC m=+3708.825514551" Jan 24 08:42:50 crc kubenswrapper[4705]: I0124 08:42:50.576883 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:42:50 crc kubenswrapper[4705]: E0124 08:42:50.577262 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:43:02 crc kubenswrapper[4705]: I0124 08:43:02.575425 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:43:02 crc kubenswrapper[4705]: E0124 08:43:02.576183 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:43:06 crc kubenswrapper[4705]: I0124 08:43:06.063289 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-x52km"] Jan 24 08:43:06 crc 
kubenswrapper[4705]: I0124 08:43:06.081239 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-a922-account-create-update-pnhgf"] Jan 24 08:43:06 crc kubenswrapper[4705]: I0124 08:43:06.092749 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-a922-account-create-update-pnhgf"] Jan 24 08:43:06 crc kubenswrapper[4705]: I0124 08:43:06.103472 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-x52km"] Jan 24 08:43:07 crc kubenswrapper[4705]: I0124 08:43:07.587005 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c82ed6b-9321-49fa-a79c-76c390cb0d50" path="/var/lib/kubelet/pods/0c82ed6b-9321-49fa-a79c-76c390cb0d50/volumes" Jan 24 08:43:07 crc kubenswrapper[4705]: I0124 08:43:07.588323 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f498e1a-b565-40e1-96ae-6af81995e5d9" path="/var/lib/kubelet/pods/8f498e1a-b565-40e1-96ae-6af81995e5d9/volumes" Jan 24 08:43:08 crc kubenswrapper[4705]: I0124 08:43:08.088926 4705 scope.go:117] "RemoveContainer" containerID="09d4bd8406e40165cdf7ac442424b56ae5df0272859d547eb92aa331df81d488" Jan 24 08:43:08 crc kubenswrapper[4705]: I0124 08:43:08.130631 4705 scope.go:117] "RemoveContainer" containerID="a1c006f9b183a3d169931b64f59da04d839165f1f12c0eacf5cba61ecd020c61" Jan 24 08:43:13 crc kubenswrapper[4705]: I0124 08:43:13.576937 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:43:13 crc kubenswrapper[4705]: E0124 08:43:13.577703 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" 
podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:43:17 crc kubenswrapper[4705]: I0124 08:43:17.044881 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-h2vff"] Jan 24 08:43:17 crc kubenswrapper[4705]: I0124 08:43:17.056085 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-h2vff"] Jan 24 08:43:17 crc kubenswrapper[4705]: I0124 08:43:17.589997 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09d5b7c8-025c-46dc-8be2-7fec273bcde4" path="/var/lib/kubelet/pods/09d5b7c8-025c-46dc-8be2-7fec273bcde4/volumes" Jan 24 08:43:27 crc kubenswrapper[4705]: I0124 08:43:27.575833 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:43:27 crc kubenswrapper[4705]: E0124 08:43:27.576541 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:43:38 crc kubenswrapper[4705]: I0124 08:43:38.576014 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:43:38 crc kubenswrapper[4705]: E0124 08:43:38.576806 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:43:50 crc kubenswrapper[4705]: I0124 
08:43:50.576520 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:43:50 crc kubenswrapper[4705]: E0124 08:43:50.577422 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:44:02 crc kubenswrapper[4705]: I0124 08:44:02.575518 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:44:02 crc kubenswrapper[4705]: E0124 08:44:02.576297 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:44:08 crc kubenswrapper[4705]: I0124 08:44:08.253444 4705 scope.go:117] "RemoveContainer" containerID="a53200a50a7736aa011ce96c811f6f82c6b1d46f6712a6edbc485c2952a4da22" Jan 24 08:44:08 crc kubenswrapper[4705]: I0124 08:44:08.290067 4705 scope.go:117] "RemoveContainer" containerID="41d88174b5d16303151aa6a7c501a900153e8ee1826bcea5961edeb1fdbff9f3" Jan 24 08:44:14 crc kubenswrapper[4705]: I0124 08:44:14.576336 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:44:14 crc kubenswrapper[4705]: E0124 08:44:14.577140 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:44:25 crc kubenswrapper[4705]: I0124 08:44:25.576249 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:44:25 crc kubenswrapper[4705]: E0124 08:44:25.576891 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:44:37 crc kubenswrapper[4705]: I0124 08:44:37.647314 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:44:37 crc kubenswrapper[4705]: E0124 08:44:37.657551 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:44:42 crc kubenswrapper[4705]: I0124 08:44:42.766080 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7c64596589-v9zxl_f5382856-3a6e-4d10-beb2-9df688e2f6c7/manager/0.log" Jan 24 08:44:43 crc kubenswrapper[4705]: I0124 08:44:43.324583 
4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dtm8m"] Jan 24 08:44:43 crc kubenswrapper[4705]: I0124 08:44:43.327859 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:43 crc kubenswrapper[4705]: I0124 08:44:43.349377 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtm8m"] Jan 24 08:44:43 crc kubenswrapper[4705]: I0124 08:44:43.490105 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204d5f18-203c-4895-8b2a-89bb60e85b2c-utilities\") pod \"redhat-marketplace-dtm8m\" (UID: \"204d5f18-203c-4895-8b2a-89bb60e85b2c\") " pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:43 crc kubenswrapper[4705]: I0124 08:44:43.490248 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zmjl\" (UniqueName: \"kubernetes.io/projected/204d5f18-203c-4895-8b2a-89bb60e85b2c-kube-api-access-6zmjl\") pod \"redhat-marketplace-dtm8m\" (UID: \"204d5f18-203c-4895-8b2a-89bb60e85b2c\") " pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:43 crc kubenswrapper[4705]: I0124 08:44:43.490681 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204d5f18-203c-4895-8b2a-89bb60e85b2c-catalog-content\") pod \"redhat-marketplace-dtm8m\" (UID: \"204d5f18-203c-4895-8b2a-89bb60e85b2c\") " pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:43 crc kubenswrapper[4705]: I0124 08:44:43.591587 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204d5f18-203c-4895-8b2a-89bb60e85b2c-catalog-content\") pod 
\"redhat-marketplace-dtm8m\" (UID: \"204d5f18-203c-4895-8b2a-89bb60e85b2c\") " pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:43 crc kubenswrapper[4705]: I0124 08:44:43.591988 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204d5f18-203c-4895-8b2a-89bb60e85b2c-utilities\") pod \"redhat-marketplace-dtm8m\" (UID: \"204d5f18-203c-4895-8b2a-89bb60e85b2c\") " pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:43 crc kubenswrapper[4705]: I0124 08:44:43.592080 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zmjl\" (UniqueName: \"kubernetes.io/projected/204d5f18-203c-4895-8b2a-89bb60e85b2c-kube-api-access-6zmjl\") pod \"redhat-marketplace-dtm8m\" (UID: \"204d5f18-203c-4895-8b2a-89bb60e85b2c\") " pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:43 crc kubenswrapper[4705]: I0124 08:44:43.592163 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204d5f18-203c-4895-8b2a-89bb60e85b2c-catalog-content\") pod \"redhat-marketplace-dtm8m\" (UID: \"204d5f18-203c-4895-8b2a-89bb60e85b2c\") " pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:43 crc kubenswrapper[4705]: I0124 08:44:43.592481 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204d5f18-203c-4895-8b2a-89bb60e85b2c-utilities\") pod \"redhat-marketplace-dtm8m\" (UID: \"204d5f18-203c-4895-8b2a-89bb60e85b2c\") " pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:43 crc kubenswrapper[4705]: I0124 08:44:43.611157 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zmjl\" (UniqueName: \"kubernetes.io/projected/204d5f18-203c-4895-8b2a-89bb60e85b2c-kube-api-access-6zmjl\") pod \"redhat-marketplace-dtm8m\" (UID: 
\"204d5f18-203c-4895-8b2a-89bb60e85b2c\") " pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:43 crc kubenswrapper[4705]: I0124 08:44:43.651702 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:44 crc kubenswrapper[4705]: I0124 08:44:44.322530 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtm8m"] Jan 24 08:44:44 crc kubenswrapper[4705]: I0124 08:44:44.765551 4705 generic.go:334] "Generic (PLEG): container finished" podID="204d5f18-203c-4895-8b2a-89bb60e85b2c" containerID="f0f10d2ea06cd98e94f28db1e42cd7c83adedc88d037b49d6b876d0bfa3e225e" exitCode=0 Jan 24 08:44:44 crc kubenswrapper[4705]: I0124 08:44:44.765590 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtm8m" event={"ID":"204d5f18-203c-4895-8b2a-89bb60e85b2c","Type":"ContainerDied","Data":"f0f10d2ea06cd98e94f28db1e42cd7c83adedc88d037b49d6b876d0bfa3e225e"} Jan 24 08:44:44 crc kubenswrapper[4705]: I0124 08:44:44.765815 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtm8m" event={"ID":"204d5f18-203c-4895-8b2a-89bb60e85b2c","Type":"ContainerStarted","Data":"c2c10c9f8ff2726b6120bc338185a0b953e88788ee042485b721a2e439ac2741"} Jan 24 08:44:46 crc kubenswrapper[4705]: I0124 08:44:46.501877 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:44:46 crc kubenswrapper[4705]: I0124 08:44:46.502793 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="216859b8-4276-48fb-8695-6f78f29561b1" containerName="prometheus" containerID="cri-o://813bb273db2480ebffc771aa00c292d383cc3ecd7540b5a71649d2515fe4fc56" gracePeriod=600 Jan 24 08:44:46 crc kubenswrapper[4705]: I0124 08:44:46.502866 4705 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="216859b8-4276-48fb-8695-6f78f29561b1" containerName="config-reloader" containerID="cri-o://c879207cfb63bf2503a173048cbe4a5b445d7e6ca33733263ca850b900ee019f" gracePeriod=600 Jan 24 08:44:46 crc kubenswrapper[4705]: I0124 08:44:46.502919 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="216859b8-4276-48fb-8695-6f78f29561b1" containerName="thanos-sidecar" containerID="cri-o://51a1c389676e3f8b44f59aacd5d8fa52ccf84c4d74ee7a059a3149f9e3f405d9" gracePeriod=600 Jan 24 08:44:46 crc kubenswrapper[4705]: I0124 08:44:46.856139 4705 generic.go:334] "Generic (PLEG): container finished" podID="216859b8-4276-48fb-8695-6f78f29561b1" containerID="51a1c389676e3f8b44f59aacd5d8fa52ccf84c4d74ee7a059a3149f9e3f405d9" exitCode=0 Jan 24 08:44:46 crc kubenswrapper[4705]: I0124 08:44:46.856171 4705 generic.go:334] "Generic (PLEG): container finished" podID="216859b8-4276-48fb-8695-6f78f29561b1" containerID="813bb273db2480ebffc771aa00c292d383cc3ecd7540b5a71649d2515fe4fc56" exitCode=0 Jan 24 08:44:46 crc kubenswrapper[4705]: I0124 08:44:46.856191 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"216859b8-4276-48fb-8695-6f78f29561b1","Type":"ContainerDied","Data":"51a1c389676e3f8b44f59aacd5d8fa52ccf84c4d74ee7a059a3149f9e3f405d9"} Jan 24 08:44:46 crc kubenswrapper[4705]: I0124 08:44:46.856235 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"216859b8-4276-48fb-8695-6f78f29561b1","Type":"ContainerDied","Data":"813bb273db2480ebffc771aa00c292d383cc3ecd7540b5a71649d2515fe4fc56"} Jan 24 08:44:46 crc kubenswrapper[4705]: I0124 08:44:46.862812 4705 generic.go:334] "Generic (PLEG): container finished" podID="204d5f18-203c-4895-8b2a-89bb60e85b2c" containerID="0f786c5deba1422405c8d6764a6270acef70106ae829912406038925a8925cca" exitCode=0 
Jan 24 08:44:46 crc kubenswrapper[4705]: I0124 08:44:46.862874 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtm8m" event={"ID":"204d5f18-203c-4895-8b2a-89bb60e85b2c","Type":"ContainerDied","Data":"0f786c5deba1422405c8d6764a6270acef70106ae829912406038925a8925cca"} Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.563318 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.692396 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/216859b8-4276-48fb-8695-6f78f29561b1-config-out\") pod \"216859b8-4276-48fb-8695-6f78f29561b1\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.692457 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config\") pod \"216859b8-4276-48fb-8695-6f78f29561b1\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.692515 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-2\") pod \"216859b8-4276-48fb-8695-6f78f29561b1\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.692543 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-secret-combined-ca-bundle\") pod \"216859b8-4276-48fb-8695-6f78f29561b1\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " Jan 24 08:44:47 crc 
kubenswrapper[4705]: I0124 08:44:47.692574 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"216859b8-4276-48fb-8695-6f78f29561b1\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.692609 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-0\") pod \"216859b8-4276-48fb-8695-6f78f29561b1\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.692641 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/216859b8-4276-48fb-8695-6f78f29561b1-tls-assets\") pod \"216859b8-4276-48fb-8695-6f78f29561b1\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.692662 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-thanos-prometheus-http-client-file\") pod \"216859b8-4276-48fb-8695-6f78f29561b1\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.692690 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhmt2\" (UniqueName: \"kubernetes.io/projected/216859b8-4276-48fb-8695-6f78f29561b1-kube-api-access-zhmt2\") pod \"216859b8-4276-48fb-8695-6f78f29561b1\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.692719 
4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"216859b8-4276-48fb-8695-6f78f29561b1\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.692754 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-1\") pod \"216859b8-4276-48fb-8695-6f78f29561b1\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.692796 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-config\") pod \"216859b8-4276-48fb-8695-6f78f29561b1\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.692859 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-db\") pod \"216859b8-4276-48fb-8695-6f78f29561b1\" (UID: \"216859b8-4276-48fb-8695-6f78f29561b1\") " Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.697282 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "216859b8-4276-48fb-8695-6f78f29561b1" (UID: "216859b8-4276-48fb-8695-6f78f29561b1"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.697322 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "216859b8-4276-48fb-8695-6f78f29561b1" (UID: "216859b8-4276-48fb-8695-6f78f29561b1"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.705194 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-db" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "216859b8-4276-48fb-8695-6f78f29561b1" (UID: "216859b8-4276-48fb-8695-6f78f29561b1"). InnerVolumeSpecName "prometheus-metric-storage-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.706393 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/216859b8-4276-48fb-8695-6f78f29561b1-config-out" (OuterVolumeSpecName: "config-out") pod "216859b8-4276-48fb-8695-6f78f29561b1" (UID: "216859b8-4276-48fb-8695-6f78f29561b1"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.706923 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "216859b8-4276-48fb-8695-6f78f29561b1" (UID: "216859b8-4276-48fb-8695-6f78f29561b1"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.708443 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "216859b8-4276-48fb-8695-6f78f29561b1" (UID: "216859b8-4276-48fb-8695-6f78f29561b1"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.710235 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216859b8-4276-48fb-8695-6f78f29561b1-kube-api-access-zhmt2" (OuterVolumeSpecName: "kube-api-access-zhmt2") pod "216859b8-4276-48fb-8695-6f78f29561b1" (UID: "216859b8-4276-48fb-8695-6f78f29561b1"). InnerVolumeSpecName "kube-api-access-zhmt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.710995 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216859b8-4276-48fb-8695-6f78f29561b1-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "216859b8-4276-48fb-8695-6f78f29561b1" (UID: "216859b8-4276-48fb-8695-6f78f29561b1"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.722998 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "216859b8-4276-48fb-8695-6f78f29561b1" (UID: "216859b8-4276-48fb-8695-6f78f29561b1"). InnerVolumeSpecName "secret-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.723027 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "216859b8-4276-48fb-8695-6f78f29561b1" (UID: "216859b8-4276-48fb-8695-6f78f29561b1"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.724013 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-config" (OuterVolumeSpecName: "config") pod "216859b8-4276-48fb-8695-6f78f29561b1" (UID: "216859b8-4276-48fb-8695-6f78f29561b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.724135 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "216859b8-4276-48fb-8695-6f78f29561b1" (UID: "216859b8-4276-48fb-8695-6f78f29561b1"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.794530 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.794640 4705 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-db\") on node \"crc\" DevicePath \"\"" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.794651 4705 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/216859b8-4276-48fb-8695-6f78f29561b1-config-out\") on node \"crc\" DevicePath \"\"" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.794661 4705 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.794673 4705 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.794697 4705 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.794710 4705 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.794721 4705 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/216859b8-4276-48fb-8695-6f78f29561b1-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.794731 4705 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.794743 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhmt2\" (UniqueName: \"kubernetes.io/projected/216859b8-4276-48fb-8695-6f78f29561b1-kube-api-access-zhmt2\") on node \"crc\" DevicePath \"\"" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.794752 4705 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.794762 4705 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/216859b8-4276-48fb-8695-6f78f29561b1-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.849372 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config" (OuterVolumeSpecName: "web-config") pod "216859b8-4276-48fb-8695-6f78f29561b1" (UID: "216859b8-4276-48fb-8695-6f78f29561b1"). 
InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.877269 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtm8m" event={"ID":"204d5f18-203c-4895-8b2a-89bb60e85b2c","Type":"ContainerStarted","Data":"cafc82556629e9de172e683694c2834b1173aa5c71eb9b5f153c8e61f07f6d6f"} Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.884034 4705 generic.go:334] "Generic (PLEG): container finished" podID="216859b8-4276-48fb-8695-6f78f29561b1" containerID="c879207cfb63bf2503a173048cbe4a5b445d7e6ca33733263ca850b900ee019f" exitCode=0 Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.884100 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"216859b8-4276-48fb-8695-6f78f29561b1","Type":"ContainerDied","Data":"c879207cfb63bf2503a173048cbe4a5b445d7e6ca33733263ca850b900ee019f"} Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.884132 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"216859b8-4276-48fb-8695-6f78f29561b1","Type":"ContainerDied","Data":"83f82212becd1d2a52d744b7e863d3824a3a3a66a09eebfa141dc42c6fd71d9e"} Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.884176 4705 scope.go:117] "RemoveContainer" containerID="51a1c389676e3f8b44f59aacd5d8fa52ccf84c4d74ee7a059a3149f9e3f405d9" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.884419 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.900435 4705 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/216859b8-4276-48fb-8695-6f78f29561b1-web-config\") on node \"crc\" DevicePath \"\"" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.904673 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dtm8m" podStartSLOduration=2.349447317 podStartE2EDuration="4.904652005s" podCreationTimestamp="2026-01-24 08:44:43 +0000 UTC" firstStartedPulling="2026-01-24 08:44:44.767491422 +0000 UTC m=+3823.487364710" lastFinishedPulling="2026-01-24 08:44:47.32269611 +0000 UTC m=+3826.042569398" observedRunningTime="2026-01-24 08:44:47.898541762 +0000 UTC m=+3826.618415060" watchObservedRunningTime="2026-01-24 08:44:47.904652005 +0000 UTC m=+3826.624525293" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.928420 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.928549 4705 scope.go:117] "RemoveContainer" containerID="c879207cfb63bf2503a173048cbe4a5b445d7e6ca33733263ca850b900ee019f" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.938043 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.951218 4705 scope.go:117] "RemoveContainer" containerID="813bb273db2480ebffc771aa00c292d383cc3ecd7540b5a71649d2515fe4fc56" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.972300 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:44:47 crc kubenswrapper[4705]: E0124 08:44:47.972843 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216859b8-4276-48fb-8695-6f78f29561b1" 
containerName="thanos-sidecar" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.972864 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="216859b8-4276-48fb-8695-6f78f29561b1" containerName="thanos-sidecar" Jan 24 08:44:47 crc kubenswrapper[4705]: E0124 08:44:47.972883 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216859b8-4276-48fb-8695-6f78f29561b1" containerName="prometheus" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.972893 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="216859b8-4276-48fb-8695-6f78f29561b1" containerName="prometheus" Jan 24 08:44:47 crc kubenswrapper[4705]: E0124 08:44:47.972907 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216859b8-4276-48fb-8695-6f78f29561b1" containerName="config-reloader" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.972916 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="216859b8-4276-48fb-8695-6f78f29561b1" containerName="config-reloader" Jan 24 08:44:47 crc kubenswrapper[4705]: E0124 08:44:47.972930 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216859b8-4276-48fb-8695-6f78f29561b1" containerName="init-config-reloader" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.972939 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="216859b8-4276-48fb-8695-6f78f29561b1" containerName="init-config-reloader" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.973164 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="216859b8-4276-48fb-8695-6f78f29561b1" containerName="prometheus" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.973179 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="216859b8-4276-48fb-8695-6f78f29561b1" containerName="thanos-sidecar" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.973197 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="216859b8-4276-48fb-8695-6f78f29561b1" 
containerName="config-reloader" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.975868 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.980882 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.985644 4705 scope.go:117] "RemoveContainer" containerID="3427d3bc8ba7d95bbf911e9e3e092ca2ba4e7e01e00d06d474eb933bac8dd975" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.985911 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.986155 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.986397 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.986869 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-c9h62" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.987047 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.992291 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.992567 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 24 08:44:47 crc kubenswrapper[4705]: I0124 08:44:47.997615 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.002769 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.003087 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.003256 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.003390 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.003505 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.003629 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.003774 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-config\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.003914 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.004015 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc 
kubenswrapper[4705]: I0124 08:44:48.004138 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.004282 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnc59\" (UniqueName: \"kubernetes.io/projected/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-kube-api-access-tnc59\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.004493 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.004604 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.032572 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 
08:44:48.051114 4705 scope.go:117] "RemoveContainer" containerID="51a1c389676e3f8b44f59aacd5d8fa52ccf84c4d74ee7a059a3149f9e3f405d9" Jan 24 08:44:48 crc kubenswrapper[4705]: E0124 08:44:48.051668 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a1c389676e3f8b44f59aacd5d8fa52ccf84c4d74ee7a059a3149f9e3f405d9\": container with ID starting with 51a1c389676e3f8b44f59aacd5d8fa52ccf84c4d74ee7a059a3149f9e3f405d9 not found: ID does not exist" containerID="51a1c389676e3f8b44f59aacd5d8fa52ccf84c4d74ee7a059a3149f9e3f405d9" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.051738 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a1c389676e3f8b44f59aacd5d8fa52ccf84c4d74ee7a059a3149f9e3f405d9"} err="failed to get container status \"51a1c389676e3f8b44f59aacd5d8fa52ccf84c4d74ee7a059a3149f9e3f405d9\": rpc error: code = NotFound desc = could not find container \"51a1c389676e3f8b44f59aacd5d8fa52ccf84c4d74ee7a059a3149f9e3f405d9\": container with ID starting with 51a1c389676e3f8b44f59aacd5d8fa52ccf84c4d74ee7a059a3149f9e3f405d9 not found: ID does not exist" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.051776 4705 scope.go:117] "RemoveContainer" containerID="c879207cfb63bf2503a173048cbe4a5b445d7e6ca33733263ca850b900ee019f" Jan 24 08:44:48 crc kubenswrapper[4705]: E0124 08:44:48.052237 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c879207cfb63bf2503a173048cbe4a5b445d7e6ca33733263ca850b900ee019f\": container with ID starting with c879207cfb63bf2503a173048cbe4a5b445d7e6ca33733263ca850b900ee019f not found: ID does not exist" containerID="c879207cfb63bf2503a173048cbe4a5b445d7e6ca33733263ca850b900ee019f" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.052267 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c879207cfb63bf2503a173048cbe4a5b445d7e6ca33733263ca850b900ee019f"} err="failed to get container status \"c879207cfb63bf2503a173048cbe4a5b445d7e6ca33733263ca850b900ee019f\": rpc error: code = NotFound desc = could not find container \"c879207cfb63bf2503a173048cbe4a5b445d7e6ca33733263ca850b900ee019f\": container with ID starting with c879207cfb63bf2503a173048cbe4a5b445d7e6ca33733263ca850b900ee019f not found: ID does not exist" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.052302 4705 scope.go:117] "RemoveContainer" containerID="813bb273db2480ebffc771aa00c292d383cc3ecd7540b5a71649d2515fe4fc56" Jan 24 08:44:48 crc kubenswrapper[4705]: E0124 08:44:48.053239 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"813bb273db2480ebffc771aa00c292d383cc3ecd7540b5a71649d2515fe4fc56\": container with ID starting with 813bb273db2480ebffc771aa00c292d383cc3ecd7540b5a71649d2515fe4fc56 not found: ID does not exist" containerID="813bb273db2480ebffc771aa00c292d383cc3ecd7540b5a71649d2515fe4fc56" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.053279 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"813bb273db2480ebffc771aa00c292d383cc3ecd7540b5a71649d2515fe4fc56"} err="failed to get container status \"813bb273db2480ebffc771aa00c292d383cc3ecd7540b5a71649d2515fe4fc56\": rpc error: code = NotFound desc = could not find container \"813bb273db2480ebffc771aa00c292d383cc3ecd7540b5a71649d2515fe4fc56\": container with ID starting with 813bb273db2480ebffc771aa00c292d383cc3ecd7540b5a71649d2515fe4fc56 not found: ID does not exist" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.053310 4705 scope.go:117] "RemoveContainer" containerID="3427d3bc8ba7d95bbf911e9e3e092ca2ba4e7e01e00d06d474eb933bac8dd975" Jan 24 08:44:48 crc kubenswrapper[4705]: E0124 08:44:48.055558 4705 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3427d3bc8ba7d95bbf911e9e3e092ca2ba4e7e01e00d06d474eb933bac8dd975\": container with ID starting with 3427d3bc8ba7d95bbf911e9e3e092ca2ba4e7e01e00d06d474eb933bac8dd975 not found: ID does not exist" containerID="3427d3bc8ba7d95bbf911e9e3e092ca2ba4e7e01e00d06d474eb933bac8dd975" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.055598 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3427d3bc8ba7d95bbf911e9e3e092ca2ba4e7e01e00d06d474eb933bac8dd975"} err="failed to get container status \"3427d3bc8ba7d95bbf911e9e3e092ca2ba4e7e01e00d06d474eb933bac8dd975\": rpc error: code = NotFound desc = could not find container \"3427d3bc8ba7d95bbf911e9e3e092ca2ba4e7e01e00d06d474eb933bac8dd975\": container with ID starting with 3427d3bc8ba7d95bbf911e9e3e092ca2ba4e7e01e00d06d474eb933bac8dd975 not found: ID does not exist" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.106333 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.106380 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.106413 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.106443 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnc59\" (UniqueName: \"kubernetes.io/projected/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-kube-api-access-tnc59\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.106495 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.106520 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.106555 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc 
kubenswrapper[4705]: I0124 08:44:48.106586 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.106613 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.106634 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.106665 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.106705 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " 
pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.106742 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-config\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.107898 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.108081 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.108553 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.108858 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" 
(UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.110867 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.111408 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.112146 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.115268 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-config\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.116378 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.116668 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.116670 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.116871 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.124734 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnc59\" (UniqueName: \"kubernetes.io/projected/3aa939bc-2a0f-4610-a5c5-62043aa52bdf-kube-api-access-tnc59\") pod \"prometheus-metric-storage-0\" (UID: \"3aa939bc-2a0f-4610-a5c5-62043aa52bdf\") " pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:48 crc kubenswrapper[4705]: I0124 08:44:48.354663 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 24 08:44:49 crc kubenswrapper[4705]: I0124 08:44:49.279890 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 24 08:44:49 crc kubenswrapper[4705]: I0124 08:44:49.587595 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216859b8-4276-48fb-8695-6f78f29561b1" path="/var/lib/kubelet/pods/216859b8-4276-48fb-8695-6f78f29561b1/volumes" Jan 24 08:44:49 crc kubenswrapper[4705]: I0124 08:44:49.931360 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3aa939bc-2a0f-4610-a5c5-62043aa52bdf","Type":"ContainerStarted","Data":"f6cf9bf2aa65fc84d3181fefef586f9acb11ed8ddc6e2bf728b5078911eb0674"} Jan 24 08:44:52 crc kubenswrapper[4705]: I0124 08:44:52.575925 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:44:52 crc kubenswrapper[4705]: E0124 08:44:52.576688 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:44:53 crc kubenswrapper[4705]: I0124 08:44:53.653770 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:53 crc kubenswrapper[4705]: I0124 08:44:53.654479 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:53 crc kubenswrapper[4705]: I0124 08:44:53.703310 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:53 crc kubenswrapper[4705]: I0124 08:44:53.973568 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3aa939bc-2a0f-4610-a5c5-62043aa52bdf","Type":"ContainerStarted","Data":"adaa086da700f220d65970cca7632004c6897d01821448367bd5f04293afaf1f"} Jan 24 08:44:54 crc kubenswrapper[4705]: I0124 08:44:54.026505 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:54 crc kubenswrapper[4705]: I0124 08:44:54.074435 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtm8m"] Jan 24 08:44:55 crc kubenswrapper[4705]: I0124 08:44:55.988194 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dtm8m" podUID="204d5f18-203c-4895-8b2a-89bb60e85b2c" containerName="registry-server" containerID="cri-o://cafc82556629e9de172e683694c2834b1173aa5c71eb9b5f153c8e61f07f6d6f" gracePeriod=2 Jan 24 08:44:56 crc kubenswrapper[4705]: I0124 08:44:56.556716 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:56 crc kubenswrapper[4705]: I0124 08:44:56.705749 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204d5f18-203c-4895-8b2a-89bb60e85b2c-utilities\") pod \"204d5f18-203c-4895-8b2a-89bb60e85b2c\" (UID: \"204d5f18-203c-4895-8b2a-89bb60e85b2c\") " Jan 24 08:44:56 crc kubenswrapper[4705]: I0124 08:44:56.705815 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zmjl\" (UniqueName: \"kubernetes.io/projected/204d5f18-203c-4895-8b2a-89bb60e85b2c-kube-api-access-6zmjl\") pod \"204d5f18-203c-4895-8b2a-89bb60e85b2c\" (UID: \"204d5f18-203c-4895-8b2a-89bb60e85b2c\") " Jan 24 08:44:56 crc kubenswrapper[4705]: I0124 08:44:56.705930 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204d5f18-203c-4895-8b2a-89bb60e85b2c-catalog-content\") pod \"204d5f18-203c-4895-8b2a-89bb60e85b2c\" (UID: \"204d5f18-203c-4895-8b2a-89bb60e85b2c\") " Jan 24 08:44:56 crc kubenswrapper[4705]: I0124 08:44:56.706985 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/204d5f18-203c-4895-8b2a-89bb60e85b2c-utilities" (OuterVolumeSpecName: "utilities") pod "204d5f18-203c-4895-8b2a-89bb60e85b2c" (UID: "204d5f18-203c-4895-8b2a-89bb60e85b2c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:44:56 crc kubenswrapper[4705]: I0124 08:44:56.712155 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/204d5f18-203c-4895-8b2a-89bb60e85b2c-kube-api-access-6zmjl" (OuterVolumeSpecName: "kube-api-access-6zmjl") pod "204d5f18-203c-4895-8b2a-89bb60e85b2c" (UID: "204d5f18-203c-4895-8b2a-89bb60e85b2c"). InnerVolumeSpecName "kube-api-access-6zmjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:44:56 crc kubenswrapper[4705]: I0124 08:44:56.730989 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/204d5f18-203c-4895-8b2a-89bb60e85b2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "204d5f18-203c-4895-8b2a-89bb60e85b2c" (UID: "204d5f18-203c-4895-8b2a-89bb60e85b2c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:44:56 crc kubenswrapper[4705]: I0124 08:44:56.808216 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204d5f18-203c-4895-8b2a-89bb60e85b2c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:44:56 crc kubenswrapper[4705]: I0124 08:44:56.808274 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204d5f18-203c-4895-8b2a-89bb60e85b2c-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:44:56 crc kubenswrapper[4705]: I0124 08:44:56.808295 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zmjl\" (UniqueName: \"kubernetes.io/projected/204d5f18-203c-4895-8b2a-89bb60e85b2c-kube-api-access-6zmjl\") on node \"crc\" DevicePath \"\"" Jan 24 08:44:56 crc kubenswrapper[4705]: I0124 08:44:56.999177 4705 generic.go:334] "Generic (PLEG): container finished" podID="204d5f18-203c-4895-8b2a-89bb60e85b2c" containerID="cafc82556629e9de172e683694c2834b1173aa5c71eb9b5f153c8e61f07f6d6f" exitCode=0 Jan 24 08:44:56 crc kubenswrapper[4705]: I0124 08:44:56.999234 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtm8m" event={"ID":"204d5f18-203c-4895-8b2a-89bb60e85b2c","Type":"ContainerDied","Data":"cafc82556629e9de172e683694c2834b1173aa5c71eb9b5f153c8e61f07f6d6f"} Jan 24 08:44:56 crc kubenswrapper[4705]: I0124 08:44:56.999285 4705 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-dtm8m" event={"ID":"204d5f18-203c-4895-8b2a-89bb60e85b2c","Type":"ContainerDied","Data":"c2c10c9f8ff2726b6120bc338185a0b953e88788ee042485b721a2e439ac2741"} Jan 24 08:44:56 crc kubenswrapper[4705]: I0124 08:44:56.999305 4705 scope.go:117] "RemoveContainer" containerID="cafc82556629e9de172e683694c2834b1173aa5c71eb9b5f153c8e61f07f6d6f" Jan 24 08:44:56 crc kubenswrapper[4705]: I0124 08:44:56.999303 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtm8m" Jan 24 08:44:57 crc kubenswrapper[4705]: I0124 08:44:57.028781 4705 scope.go:117] "RemoveContainer" containerID="0f786c5deba1422405c8d6764a6270acef70106ae829912406038925a8925cca" Jan 24 08:44:57 crc kubenswrapper[4705]: I0124 08:44:57.029794 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtm8m"] Jan 24 08:44:57 crc kubenswrapper[4705]: I0124 08:44:57.052666 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtm8m"] Jan 24 08:44:57 crc kubenswrapper[4705]: I0124 08:44:57.058075 4705 scope.go:117] "RemoveContainer" containerID="f0f10d2ea06cd98e94f28db1e42cd7c83adedc88d037b49d6b876d0bfa3e225e" Jan 24 08:44:57 crc kubenswrapper[4705]: I0124 08:44:57.104993 4705 scope.go:117] "RemoveContainer" containerID="cafc82556629e9de172e683694c2834b1173aa5c71eb9b5f153c8e61f07f6d6f" Jan 24 08:44:57 crc kubenswrapper[4705]: E0124 08:44:57.106273 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cafc82556629e9de172e683694c2834b1173aa5c71eb9b5f153c8e61f07f6d6f\": container with ID starting with cafc82556629e9de172e683694c2834b1173aa5c71eb9b5f153c8e61f07f6d6f not found: ID does not exist" containerID="cafc82556629e9de172e683694c2834b1173aa5c71eb9b5f153c8e61f07f6d6f" Jan 24 08:44:57 crc kubenswrapper[4705]: I0124 08:44:57.106342 4705 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cafc82556629e9de172e683694c2834b1173aa5c71eb9b5f153c8e61f07f6d6f"} err="failed to get container status \"cafc82556629e9de172e683694c2834b1173aa5c71eb9b5f153c8e61f07f6d6f\": rpc error: code = NotFound desc = could not find container \"cafc82556629e9de172e683694c2834b1173aa5c71eb9b5f153c8e61f07f6d6f\": container with ID starting with cafc82556629e9de172e683694c2834b1173aa5c71eb9b5f153c8e61f07f6d6f not found: ID does not exist" Jan 24 08:44:57 crc kubenswrapper[4705]: I0124 08:44:57.106368 4705 scope.go:117] "RemoveContainer" containerID="0f786c5deba1422405c8d6764a6270acef70106ae829912406038925a8925cca" Jan 24 08:44:57 crc kubenswrapper[4705]: E0124 08:44:57.111905 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f786c5deba1422405c8d6764a6270acef70106ae829912406038925a8925cca\": container with ID starting with 0f786c5deba1422405c8d6764a6270acef70106ae829912406038925a8925cca not found: ID does not exist" containerID="0f786c5deba1422405c8d6764a6270acef70106ae829912406038925a8925cca" Jan 24 08:44:57 crc kubenswrapper[4705]: I0124 08:44:57.111945 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f786c5deba1422405c8d6764a6270acef70106ae829912406038925a8925cca"} err="failed to get container status \"0f786c5deba1422405c8d6764a6270acef70106ae829912406038925a8925cca\": rpc error: code = NotFound desc = could not find container \"0f786c5deba1422405c8d6764a6270acef70106ae829912406038925a8925cca\": container with ID starting with 0f786c5deba1422405c8d6764a6270acef70106ae829912406038925a8925cca not found: ID does not exist" Jan 24 08:44:57 crc kubenswrapper[4705]: I0124 08:44:57.111972 4705 scope.go:117] "RemoveContainer" containerID="f0f10d2ea06cd98e94f28db1e42cd7c83adedc88d037b49d6b876d0bfa3e225e" Jan 24 08:44:57 crc kubenswrapper[4705]: E0124 
08:44:57.115316 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0f10d2ea06cd98e94f28db1e42cd7c83adedc88d037b49d6b876d0bfa3e225e\": container with ID starting with f0f10d2ea06cd98e94f28db1e42cd7c83adedc88d037b49d6b876d0bfa3e225e not found: ID does not exist" containerID="f0f10d2ea06cd98e94f28db1e42cd7c83adedc88d037b49d6b876d0bfa3e225e" Jan 24 08:44:57 crc kubenswrapper[4705]: I0124 08:44:57.115361 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0f10d2ea06cd98e94f28db1e42cd7c83adedc88d037b49d6b876d0bfa3e225e"} err="failed to get container status \"f0f10d2ea06cd98e94f28db1e42cd7c83adedc88d037b49d6b876d0bfa3e225e\": rpc error: code = NotFound desc = could not find container \"f0f10d2ea06cd98e94f28db1e42cd7c83adedc88d037b49d6b876d0bfa3e225e\": container with ID starting with f0f10d2ea06cd98e94f28db1e42cd7c83adedc88d037b49d6b876d0bfa3e225e not found: ID does not exist" Jan 24 08:44:57 crc kubenswrapper[4705]: I0124 08:44:57.590486 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="204d5f18-203c-4895-8b2a-89bb60e85b2c" path="/var/lib/kubelet/pods/204d5f18-203c-4895-8b2a-89bb60e85b2c/volumes" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.174630 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578"] Jan 24 08:45:00 crc kubenswrapper[4705]: E0124 08:45:00.175298 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204d5f18-203c-4895-8b2a-89bb60e85b2c" containerName="registry-server" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.175318 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="204d5f18-203c-4895-8b2a-89bb60e85b2c" containerName="registry-server" Jan 24 08:45:00 crc kubenswrapper[4705]: E0124 08:45:00.175344 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="204d5f18-203c-4895-8b2a-89bb60e85b2c" containerName="extract-utilities" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.175352 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="204d5f18-203c-4895-8b2a-89bb60e85b2c" containerName="extract-utilities" Jan 24 08:45:00 crc kubenswrapper[4705]: E0124 08:45:00.175376 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204d5f18-203c-4895-8b2a-89bb60e85b2c" containerName="extract-content" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.175383 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="204d5f18-203c-4895-8b2a-89bb60e85b2c" containerName="extract-content" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.175647 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="204d5f18-203c-4895-8b2a-89bb60e85b2c" containerName="registry-server" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.176520 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.178660 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.180009 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.188545 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578"] Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.328347 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfa4b73a-aff2-43b4-91da-82ac443ac644-secret-volume\") pod \"collect-profiles-29487405-7l578\" 
(UID: \"bfa4b73a-aff2-43b4-91da-82ac443ac644\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.329130 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfa4b73a-aff2-43b4-91da-82ac443ac644-config-volume\") pod \"collect-profiles-29487405-7l578\" (UID: \"bfa4b73a-aff2-43b4-91da-82ac443ac644\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.329278 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f88xp\" (UniqueName: \"kubernetes.io/projected/bfa4b73a-aff2-43b4-91da-82ac443ac644-kube-api-access-f88xp\") pod \"collect-profiles-29487405-7l578\" (UID: \"bfa4b73a-aff2-43b4-91da-82ac443ac644\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.431543 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfa4b73a-aff2-43b4-91da-82ac443ac644-config-volume\") pod \"collect-profiles-29487405-7l578\" (UID: \"bfa4b73a-aff2-43b4-91da-82ac443ac644\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.431651 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f88xp\" (UniqueName: \"kubernetes.io/projected/bfa4b73a-aff2-43b4-91da-82ac443ac644-kube-api-access-f88xp\") pod \"collect-profiles-29487405-7l578\" (UID: \"bfa4b73a-aff2-43b4-91da-82ac443ac644\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.431802 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfa4b73a-aff2-43b4-91da-82ac443ac644-secret-volume\") pod \"collect-profiles-29487405-7l578\" (UID: \"bfa4b73a-aff2-43b4-91da-82ac443ac644\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.432521 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfa4b73a-aff2-43b4-91da-82ac443ac644-config-volume\") pod \"collect-profiles-29487405-7l578\" (UID: \"bfa4b73a-aff2-43b4-91da-82ac443ac644\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.445728 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfa4b73a-aff2-43b4-91da-82ac443ac644-secret-volume\") pod \"collect-profiles-29487405-7l578\" (UID: \"bfa4b73a-aff2-43b4-91da-82ac443ac644\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.447945 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f88xp\" (UniqueName: \"kubernetes.io/projected/bfa4b73a-aff2-43b4-91da-82ac443ac644-kube-api-access-f88xp\") pod \"collect-profiles-29487405-7l578\" (UID: \"bfa4b73a-aff2-43b4-91da-82ac443ac644\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.499263 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578" Jan 24 08:45:00 crc kubenswrapper[4705]: I0124 08:45:00.960921 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578"] Jan 24 08:45:01 crc kubenswrapper[4705]: I0124 08:45:01.041910 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578" event={"ID":"bfa4b73a-aff2-43b4-91da-82ac443ac644","Type":"ContainerStarted","Data":"fe932ddcfbdfb82ca5aff55dfc9869ca01cde611a604587cd3bd8d4c3d3f09d1"} Jan 24 08:45:02 crc kubenswrapper[4705]: I0124 08:45:02.053300 4705 generic.go:334] "Generic (PLEG): container finished" podID="3aa939bc-2a0f-4610-a5c5-62043aa52bdf" containerID="adaa086da700f220d65970cca7632004c6897d01821448367bd5f04293afaf1f" exitCode=0 Jan 24 08:45:02 crc kubenswrapper[4705]: I0124 08:45:02.053656 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3aa939bc-2a0f-4610-a5c5-62043aa52bdf","Type":"ContainerDied","Data":"adaa086da700f220d65970cca7632004c6897d01821448367bd5f04293afaf1f"} Jan 24 08:45:02 crc kubenswrapper[4705]: I0124 08:45:02.057947 4705 generic.go:334] "Generic (PLEG): container finished" podID="bfa4b73a-aff2-43b4-91da-82ac443ac644" containerID="e837313ec21abdc49aada70b06847726431e04b47dd058adc68fa636929f5656" exitCode=0 Jan 24 08:45:02 crc kubenswrapper[4705]: I0124 08:45:02.058030 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578" event={"ID":"bfa4b73a-aff2-43b4-91da-82ac443ac644","Type":"ContainerDied","Data":"e837313ec21abdc49aada70b06847726431e04b47dd058adc68fa636929f5656"} Jan 24 08:45:03 crc kubenswrapper[4705]: I0124 08:45:03.068497 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"3aa939bc-2a0f-4610-a5c5-62043aa52bdf","Type":"ContainerStarted","Data":"9bc8ee4ad1fc16307c92c23776e602dc8ff15609828d092532236358a456dd72"} Jan 24 08:45:03 crc kubenswrapper[4705]: I0124 08:45:03.373381 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578" Jan 24 08:45:03 crc kubenswrapper[4705]: I0124 08:45:03.392590 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfa4b73a-aff2-43b4-91da-82ac443ac644-config-volume\") pod \"bfa4b73a-aff2-43b4-91da-82ac443ac644\" (UID: \"bfa4b73a-aff2-43b4-91da-82ac443ac644\") " Jan 24 08:45:03 crc kubenswrapper[4705]: I0124 08:45:03.392878 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f88xp\" (UniqueName: \"kubernetes.io/projected/bfa4b73a-aff2-43b4-91da-82ac443ac644-kube-api-access-f88xp\") pod \"bfa4b73a-aff2-43b4-91da-82ac443ac644\" (UID: \"bfa4b73a-aff2-43b4-91da-82ac443ac644\") " Jan 24 08:45:03 crc kubenswrapper[4705]: I0124 08:45:03.392954 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfa4b73a-aff2-43b4-91da-82ac443ac644-secret-volume\") pod \"bfa4b73a-aff2-43b4-91da-82ac443ac644\" (UID: \"bfa4b73a-aff2-43b4-91da-82ac443ac644\") " Jan 24 08:45:03 crc kubenswrapper[4705]: I0124 08:45:03.395022 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa4b73a-aff2-43b4-91da-82ac443ac644-config-volume" (OuterVolumeSpecName: "config-volume") pod "bfa4b73a-aff2-43b4-91da-82ac443ac644" (UID: "bfa4b73a-aff2-43b4-91da-82ac443ac644"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:45:03 crc kubenswrapper[4705]: I0124 08:45:03.400310 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa4b73a-aff2-43b4-91da-82ac443ac644-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bfa4b73a-aff2-43b4-91da-82ac443ac644" (UID: "bfa4b73a-aff2-43b4-91da-82ac443ac644"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:45:03 crc kubenswrapper[4705]: I0124 08:45:03.409077 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa4b73a-aff2-43b4-91da-82ac443ac644-kube-api-access-f88xp" (OuterVolumeSpecName: "kube-api-access-f88xp") pod "bfa4b73a-aff2-43b4-91da-82ac443ac644" (UID: "bfa4b73a-aff2-43b4-91da-82ac443ac644"). InnerVolumeSpecName "kube-api-access-f88xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:45:03 crc kubenswrapper[4705]: I0124 08:45:03.495809 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f88xp\" (UniqueName: \"kubernetes.io/projected/bfa4b73a-aff2-43b4-91da-82ac443ac644-kube-api-access-f88xp\") on node \"crc\" DevicePath \"\"" Jan 24 08:45:03 crc kubenswrapper[4705]: I0124 08:45:03.495876 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfa4b73a-aff2-43b4-91da-82ac443ac644-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 08:45:03 crc kubenswrapper[4705]: I0124 08:45:03.495886 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfa4b73a-aff2-43b4-91da-82ac443ac644-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 08:45:04 crc kubenswrapper[4705]: I0124 08:45:04.081677 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578" 
event={"ID":"bfa4b73a-aff2-43b4-91da-82ac443ac644","Type":"ContainerDied","Data":"fe932ddcfbdfb82ca5aff55dfc9869ca01cde611a604587cd3bd8d4c3d3f09d1"} Jan 24 08:45:04 crc kubenswrapper[4705]: I0124 08:45:04.081733 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe932ddcfbdfb82ca5aff55dfc9869ca01cde611a604587cd3bd8d4c3d3f09d1" Jan 24 08:45:04 crc kubenswrapper[4705]: I0124 08:45:04.081745 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487405-7l578" Jan 24 08:45:04 crc kubenswrapper[4705]: I0124 08:45:04.445976 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r"] Jan 24 08:45:04 crc kubenswrapper[4705]: I0124 08:45:04.456917 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487360-vbc6r"] Jan 24 08:45:05 crc kubenswrapper[4705]: I0124 08:45:05.575921 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:45:05 crc kubenswrapper[4705]: E0124 08:45:05.576262 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:45:05 crc kubenswrapper[4705]: I0124 08:45:05.594057 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27a0b314-0224-465a-9302-2ec3f4cdaf02" path="/var/lib/kubelet/pods/27a0b314-0224-465a-9302-2ec3f4cdaf02/volumes" Jan 24 08:45:06 crc kubenswrapper[4705]: I0124 08:45:06.105122 4705 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3aa939bc-2a0f-4610-a5c5-62043aa52bdf","Type":"ContainerStarted","Data":"52372b25668af7c456caa7311e1ccb9d0bff53fc4423b63619d15156e17e57da"} Jan 24 08:45:06 crc kubenswrapper[4705]: I0124 08:45:06.105459 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3aa939bc-2a0f-4610-a5c5-62043aa52bdf","Type":"ContainerStarted","Data":"5ee269199bb1316cfdd57ab710c5a4f7886420949ed892c0967403ea88729bea"} Jan 24 08:45:06 crc kubenswrapper[4705]: I0124 08:45:06.132897 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.132875584 podStartE2EDuration="19.132875584s" podCreationTimestamp="2026-01-24 08:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:45:06.129061515 +0000 UTC m=+3844.848934823" watchObservedRunningTime="2026-01-24 08:45:06.132875584 +0000 UTC m=+3844.852748892" Jan 24 08:45:08 crc kubenswrapper[4705]: I0124 08:45:08.355283 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 24 08:45:08 crc kubenswrapper[4705]: I0124 08:45:08.448148 4705 scope.go:117] "RemoveContainer" containerID="8956a89a11cb15cd366f1061e23eccfc9b2a4126db227cc9f8a46d0d87256add" Jan 24 08:45:08 crc kubenswrapper[4705]: I0124 08:45:08.474814 4705 scope.go:117] "RemoveContainer" containerID="2ab5dbed6d74d017b85212b627aaa5ef9d5748256e652e38979a4f004bb6407d" Jan 24 08:45:08 crc kubenswrapper[4705]: I0124 08:45:08.502920 4705 scope.go:117] "RemoveContainer" containerID="7509e55629a5479a8beb8376c5e43421699338a127532980bb6be6ffd5472aa1" Jan 24 08:45:08 crc kubenswrapper[4705]: I0124 08:45:08.548601 4705 scope.go:117] "RemoveContainer" containerID="f328dcbf47581001f5f33ec280ff071aec244c0ed0f98c4b805cb8f85db1588f" Jan 24 
08:45:18 crc kubenswrapper[4705]: I0124 08:45:18.354963 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 24 08:45:18 crc kubenswrapper[4705]: I0124 08:45:18.360598 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 24 08:45:18 crc kubenswrapper[4705]: I0124 08:45:18.490382 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 24 08:45:18 crc kubenswrapper[4705]: I0124 08:45:18.578991 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:45:18 crc kubenswrapper[4705]: E0124 08:45:18.579318 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:45:19 crc kubenswrapper[4705]: I0124 08:45:19.163062 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q7bbc"] Jan 24 08:45:19 crc kubenswrapper[4705]: E0124 08:45:19.163557 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa4b73a-aff2-43b4-91da-82ac443ac644" containerName="collect-profiles" Jan 24 08:45:19 crc kubenswrapper[4705]: I0124 08:45:19.163577 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa4b73a-aff2-43b4-91da-82ac443ac644" containerName="collect-profiles" Jan 24 08:45:19 crc kubenswrapper[4705]: I0124 08:45:19.163805 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa4b73a-aff2-43b4-91da-82ac443ac644" containerName="collect-profiles" Jan 24 08:45:19 crc 
kubenswrapper[4705]: I0124 08:45:19.165630 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:19 crc kubenswrapper[4705]: I0124 08:45:19.174226 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7bbc"] Jan 24 08:45:19 crc kubenswrapper[4705]: I0124 08:45:19.314162 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7gg8\" (UniqueName: \"kubernetes.io/projected/348988c6-a964-4db6-b916-83a07a0aadc7-kube-api-access-n7gg8\") pod \"redhat-operators-q7bbc\" (UID: \"348988c6-a964-4db6-b916-83a07a0aadc7\") " pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:19 crc kubenswrapper[4705]: I0124 08:45:19.314363 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/348988c6-a964-4db6-b916-83a07a0aadc7-utilities\") pod \"redhat-operators-q7bbc\" (UID: \"348988c6-a964-4db6-b916-83a07a0aadc7\") " pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:19 crc kubenswrapper[4705]: I0124 08:45:19.314477 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/348988c6-a964-4db6-b916-83a07a0aadc7-catalog-content\") pod \"redhat-operators-q7bbc\" (UID: \"348988c6-a964-4db6-b916-83a07a0aadc7\") " pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:19 crc kubenswrapper[4705]: I0124 08:45:19.416285 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7gg8\" (UniqueName: \"kubernetes.io/projected/348988c6-a964-4db6-b916-83a07a0aadc7-kube-api-access-n7gg8\") pod \"redhat-operators-q7bbc\" (UID: \"348988c6-a964-4db6-b916-83a07a0aadc7\") " pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:19 crc 
kubenswrapper[4705]: I0124 08:45:19.416378 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/348988c6-a964-4db6-b916-83a07a0aadc7-utilities\") pod \"redhat-operators-q7bbc\" (UID: \"348988c6-a964-4db6-b916-83a07a0aadc7\") " pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:19 crc kubenswrapper[4705]: I0124 08:45:19.416441 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/348988c6-a964-4db6-b916-83a07a0aadc7-catalog-content\") pod \"redhat-operators-q7bbc\" (UID: \"348988c6-a964-4db6-b916-83a07a0aadc7\") " pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:19 crc kubenswrapper[4705]: I0124 08:45:19.416974 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/348988c6-a964-4db6-b916-83a07a0aadc7-utilities\") pod \"redhat-operators-q7bbc\" (UID: \"348988c6-a964-4db6-b916-83a07a0aadc7\") " pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:19 crc kubenswrapper[4705]: I0124 08:45:19.417085 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/348988c6-a964-4db6-b916-83a07a0aadc7-catalog-content\") pod \"redhat-operators-q7bbc\" (UID: \"348988c6-a964-4db6-b916-83a07a0aadc7\") " pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:19 crc kubenswrapper[4705]: I0124 08:45:19.438051 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7gg8\" (UniqueName: \"kubernetes.io/projected/348988c6-a964-4db6-b916-83a07a0aadc7-kube-api-access-n7gg8\") pod \"redhat-operators-q7bbc\" (UID: \"348988c6-a964-4db6-b916-83a07a0aadc7\") " pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:19 crc kubenswrapper[4705]: I0124 08:45:19.490498 4705 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:19 crc kubenswrapper[4705]: I0124 08:45:19.994426 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7bbc"] Jan 24 08:45:19 crc kubenswrapper[4705]: W0124 08:45:19.996689 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod348988c6_a964_4db6_b916_83a07a0aadc7.slice/crio-b8ae757be8aeef8847d6b76d2b5c778176caf31beede81ce6a9d2050fff8bdab WatchSource:0}: Error finding container b8ae757be8aeef8847d6b76d2b5c778176caf31beede81ce6a9d2050fff8bdab: Status 404 returned error can't find the container with id b8ae757be8aeef8847d6b76d2b5c778176caf31beede81ce6a9d2050fff8bdab Jan 24 08:45:20 crc kubenswrapper[4705]: I0124 08:45:20.457036 4705 generic.go:334] "Generic (PLEG): container finished" podID="348988c6-a964-4db6-b916-83a07a0aadc7" containerID="5a68836afcf4c1eb2b44fc0fe6f62523982535c0d883e9b11453f217e5e32406" exitCode=0 Jan 24 08:45:20 crc kubenswrapper[4705]: I0124 08:45:20.457183 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7bbc" event={"ID":"348988c6-a964-4db6-b916-83a07a0aadc7","Type":"ContainerDied","Data":"5a68836afcf4c1eb2b44fc0fe6f62523982535c0d883e9b11453f217e5e32406"} Jan 24 08:45:20 crc kubenswrapper[4705]: I0124 08:45:20.457323 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7bbc" event={"ID":"348988c6-a964-4db6-b916-83a07a0aadc7","Type":"ContainerStarted","Data":"b8ae757be8aeef8847d6b76d2b5c778176caf31beede81ce6a9d2050fff8bdab"} Jan 24 08:45:22 crc kubenswrapper[4705]: I0124 08:45:22.477162 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7bbc" 
event={"ID":"348988c6-a964-4db6-b916-83a07a0aadc7","Type":"ContainerStarted","Data":"9f113ccd5b721d5336f22134ab4a2dda4bb4200e04f4d128646a4721093f544c"} Jan 24 08:45:25 crc kubenswrapper[4705]: I0124 08:45:25.514657 4705 generic.go:334] "Generic (PLEG): container finished" podID="348988c6-a964-4db6-b916-83a07a0aadc7" containerID="9f113ccd5b721d5336f22134ab4a2dda4bb4200e04f4d128646a4721093f544c" exitCode=0 Jan 24 08:45:25 crc kubenswrapper[4705]: I0124 08:45:25.514707 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7bbc" event={"ID":"348988c6-a964-4db6-b916-83a07a0aadc7","Type":"ContainerDied","Data":"9f113ccd5b721d5336f22134ab4a2dda4bb4200e04f4d128646a4721093f544c"} Jan 24 08:45:26 crc kubenswrapper[4705]: I0124 08:45:26.532545 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7bbc" event={"ID":"348988c6-a964-4db6-b916-83a07a0aadc7","Type":"ContainerStarted","Data":"40d5e5a94fc5451b60f5d6e18c55ea592fa6f48a45a7dd690aed2d9410aee1e5"} Jan 24 08:45:26 crc kubenswrapper[4705]: I0124 08:45:26.566914 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q7bbc" podStartSLOduration=2.089380983 podStartE2EDuration="7.566872219s" podCreationTimestamp="2026-01-24 08:45:19 +0000 UTC" firstStartedPulling="2026-01-24 08:45:20.460270931 +0000 UTC m=+3859.180144219" lastFinishedPulling="2026-01-24 08:45:25.937762167 +0000 UTC m=+3864.657635455" observedRunningTime="2026-01-24 08:45:26.550522055 +0000 UTC m=+3865.270395383" watchObservedRunningTime="2026-01-24 08:45:26.566872219 +0000 UTC m=+3865.286745547" Jan 24 08:45:29 crc kubenswrapper[4705]: I0124 08:45:29.490757 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:29 crc kubenswrapper[4705]: I0124 08:45:29.491284 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:30 crc kubenswrapper[4705]: I0124 08:45:30.540678 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q7bbc" podUID="348988c6-a964-4db6-b916-83a07a0aadc7" containerName="registry-server" probeResult="failure" output=< Jan 24 08:45:30 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Jan 24 08:45:30 crc kubenswrapper[4705]: > Jan 24 08:45:33 crc kubenswrapper[4705]: I0124 08:45:33.576567 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:45:33 crc kubenswrapper[4705]: E0124 08:45:33.577495 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:45:40 crc kubenswrapper[4705]: I0124 08:45:40.081279 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:40 crc kubenswrapper[4705]: I0124 08:45:40.130262 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:40 crc kubenswrapper[4705]: I0124 08:45:40.382156 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7bbc"] Jan 24 08:45:41 crc kubenswrapper[4705]: I0124 08:45:41.679230 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q7bbc" podUID="348988c6-a964-4db6-b916-83a07a0aadc7" containerName="registry-server" 
containerID="cri-o://40d5e5a94fc5451b60f5d6e18c55ea592fa6f48a45a7dd690aed2d9410aee1e5" gracePeriod=2 Jan 24 08:45:48 crc kubenswrapper[4705]: I0124 08:45:48.575583 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:45:48 crc kubenswrapper[4705]: E0124 08:45:48.576516 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:45:48 crc kubenswrapper[4705]: I0124 08:45:48.764288 4705 generic.go:334] "Generic (PLEG): container finished" podID="348988c6-a964-4db6-b916-83a07a0aadc7" containerID="40d5e5a94fc5451b60f5d6e18c55ea592fa6f48a45a7dd690aed2d9410aee1e5" exitCode=0 Jan 24 08:45:48 crc kubenswrapper[4705]: I0124 08:45:48.764457 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7bbc" event={"ID":"348988c6-a964-4db6-b916-83a07a0aadc7","Type":"ContainerDied","Data":"40d5e5a94fc5451b60f5d6e18c55ea592fa6f48a45a7dd690aed2d9410aee1e5"} Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.083133 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.269018 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/348988c6-a964-4db6-b916-83a07a0aadc7-utilities\") pod \"348988c6-a964-4db6-b916-83a07a0aadc7\" (UID: \"348988c6-a964-4db6-b916-83a07a0aadc7\") " Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.269607 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/348988c6-a964-4db6-b916-83a07a0aadc7-catalog-content\") pod \"348988c6-a964-4db6-b916-83a07a0aadc7\" (UID: \"348988c6-a964-4db6-b916-83a07a0aadc7\") " Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.269677 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7gg8\" (UniqueName: \"kubernetes.io/projected/348988c6-a964-4db6-b916-83a07a0aadc7-kube-api-access-n7gg8\") pod \"348988c6-a964-4db6-b916-83a07a0aadc7\" (UID: \"348988c6-a964-4db6-b916-83a07a0aadc7\") " Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.270077 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/348988c6-a964-4db6-b916-83a07a0aadc7-utilities" (OuterVolumeSpecName: "utilities") pod "348988c6-a964-4db6-b916-83a07a0aadc7" (UID: "348988c6-a964-4db6-b916-83a07a0aadc7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.270253 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/348988c6-a964-4db6-b916-83a07a0aadc7-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.275261 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/348988c6-a964-4db6-b916-83a07a0aadc7-kube-api-access-n7gg8" (OuterVolumeSpecName: "kube-api-access-n7gg8") pod "348988c6-a964-4db6-b916-83a07a0aadc7" (UID: "348988c6-a964-4db6-b916-83a07a0aadc7"). InnerVolumeSpecName "kube-api-access-n7gg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.372038 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7gg8\" (UniqueName: \"kubernetes.io/projected/348988c6-a964-4db6-b916-83a07a0aadc7-kube-api-access-n7gg8\") on node \"crc\" DevicePath \"\"" Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.381355 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/348988c6-a964-4db6-b916-83a07a0aadc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "348988c6-a964-4db6-b916-83a07a0aadc7" (UID: "348988c6-a964-4db6-b916-83a07a0aadc7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.473697 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/348988c6-a964-4db6-b916-83a07a0aadc7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.775343 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7bbc" event={"ID":"348988c6-a964-4db6-b916-83a07a0aadc7","Type":"ContainerDied","Data":"b8ae757be8aeef8847d6b76d2b5c778176caf31beede81ce6a9d2050fff8bdab"} Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.775410 4705 scope.go:117] "RemoveContainer" containerID="40d5e5a94fc5451b60f5d6e18c55ea592fa6f48a45a7dd690aed2d9410aee1e5" Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.775429 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7bbc" Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.801330 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7bbc"] Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.803809 4705 scope.go:117] "RemoveContainer" containerID="9f113ccd5b721d5336f22134ab4a2dda4bb4200e04f4d128646a4721093f544c" Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.810767 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q7bbc"] Jan 24 08:45:49 crc kubenswrapper[4705]: I0124 08:45:49.825067 4705 scope.go:117] "RemoveContainer" containerID="5a68836afcf4c1eb2b44fc0fe6f62523982535c0d883e9b11453f217e5e32406" Jan 24 08:45:51 crc kubenswrapper[4705]: I0124 08:45:51.586998 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="348988c6-a964-4db6-b916-83a07a0aadc7" path="/var/lib/kubelet/pods/348988c6-a964-4db6-b916-83a07a0aadc7/volumes" Jan 24 08:46:01 crc 
kubenswrapper[4705]: I0124 08:46:01.584030 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:46:01 crc kubenswrapper[4705]: E0124 08:46:01.585672 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:46:16 crc kubenswrapper[4705]: I0124 08:46:16.576238 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:46:16 crc kubenswrapper[4705]: E0124 08:46:16.577392 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:46:31 crc kubenswrapper[4705]: I0124 08:46:31.583063 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:46:31 crc kubenswrapper[4705]: E0124 08:46:31.585543 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 
24 08:46:46 crc kubenswrapper[4705]: I0124 08:46:46.176842 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7c64596589-v9zxl_f5382856-3a6e-4d10-beb2-9df688e2f6c7/manager/0.log" Jan 24 08:46:46 crc kubenswrapper[4705]: I0124 08:46:46.576312 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:46:46 crc kubenswrapper[4705]: E0124 08:46:46.576743 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:47:01 crc kubenswrapper[4705]: I0124 08:47:01.587280 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:47:01 crc kubenswrapper[4705]: E0124 08:47:01.588554 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:47:04 crc kubenswrapper[4705]: I0124 08:47:04.603356 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6bsz5/must-gather-r2vbl"] Jan 24 08:47:04 crc kubenswrapper[4705]: E0124 08:47:04.604519 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348988c6-a964-4db6-b916-83a07a0aadc7" containerName="registry-server" Jan 24 08:47:04 crc 
kubenswrapper[4705]: I0124 08:47:04.604539 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="348988c6-a964-4db6-b916-83a07a0aadc7" containerName="registry-server" Jan 24 08:47:04 crc kubenswrapper[4705]: E0124 08:47:04.604579 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348988c6-a964-4db6-b916-83a07a0aadc7" containerName="extract-utilities" Jan 24 08:47:04 crc kubenswrapper[4705]: I0124 08:47:04.604589 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="348988c6-a964-4db6-b916-83a07a0aadc7" containerName="extract-utilities" Jan 24 08:47:04 crc kubenswrapper[4705]: E0124 08:47:04.604622 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348988c6-a964-4db6-b916-83a07a0aadc7" containerName="extract-content" Jan 24 08:47:04 crc kubenswrapper[4705]: I0124 08:47:04.604631 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="348988c6-a964-4db6-b916-83a07a0aadc7" containerName="extract-content" Jan 24 08:47:04 crc kubenswrapper[4705]: I0124 08:47:04.608644 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="348988c6-a964-4db6-b916-83a07a0aadc7" containerName="registry-server" Jan 24 08:47:04 crc kubenswrapper[4705]: I0124 08:47:04.628679 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6bsz5/must-gather-r2vbl"] Jan 24 08:47:04 crc kubenswrapper[4705]: I0124 08:47:04.628813 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bsz5/must-gather-r2vbl" Jan 24 08:47:04 crc kubenswrapper[4705]: I0124 08:47:04.633487 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6bsz5"/"openshift-service-ca.crt" Jan 24 08:47:04 crc kubenswrapper[4705]: I0124 08:47:04.633525 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6bsz5"/"kube-root-ca.crt" Jan 24 08:47:04 crc kubenswrapper[4705]: I0124 08:47:04.634645 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6bsz5"/"default-dockercfg-khpfb" Jan 24 08:47:04 crc kubenswrapper[4705]: I0124 08:47:04.757218 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md64l\" (UniqueName: \"kubernetes.io/projected/f7655f65-4aae-4328-8800-400aca52aa98-kube-api-access-md64l\") pod \"must-gather-r2vbl\" (UID: \"f7655f65-4aae-4328-8800-400aca52aa98\") " pod="openshift-must-gather-6bsz5/must-gather-r2vbl" Jan 24 08:47:04 crc kubenswrapper[4705]: I0124 08:47:04.757372 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7655f65-4aae-4328-8800-400aca52aa98-must-gather-output\") pod \"must-gather-r2vbl\" (UID: \"f7655f65-4aae-4328-8800-400aca52aa98\") " pod="openshift-must-gather-6bsz5/must-gather-r2vbl" Jan 24 08:47:04 crc kubenswrapper[4705]: I0124 08:47:04.859078 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md64l\" (UniqueName: \"kubernetes.io/projected/f7655f65-4aae-4328-8800-400aca52aa98-kube-api-access-md64l\") pod \"must-gather-r2vbl\" (UID: \"f7655f65-4aae-4328-8800-400aca52aa98\") " pod="openshift-must-gather-6bsz5/must-gather-r2vbl" Jan 24 08:47:04 crc kubenswrapper[4705]: I0124 08:47:04.859168 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7655f65-4aae-4328-8800-400aca52aa98-must-gather-output\") pod \"must-gather-r2vbl\" (UID: \"f7655f65-4aae-4328-8800-400aca52aa98\") " pod="openshift-must-gather-6bsz5/must-gather-r2vbl" Jan 24 08:47:04 crc kubenswrapper[4705]: I0124 08:47:04.859836 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7655f65-4aae-4328-8800-400aca52aa98-must-gather-output\") pod \"must-gather-r2vbl\" (UID: \"f7655f65-4aae-4328-8800-400aca52aa98\") " pod="openshift-must-gather-6bsz5/must-gather-r2vbl" Jan 24 08:47:04 crc kubenswrapper[4705]: I0124 08:47:04.878618 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md64l\" (UniqueName: \"kubernetes.io/projected/f7655f65-4aae-4328-8800-400aca52aa98-kube-api-access-md64l\") pod \"must-gather-r2vbl\" (UID: \"f7655f65-4aae-4328-8800-400aca52aa98\") " pod="openshift-must-gather-6bsz5/must-gather-r2vbl" Jan 24 08:47:04 crc kubenswrapper[4705]: I0124 08:47:04.955700 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bsz5/must-gather-r2vbl" Jan 24 08:47:05 crc kubenswrapper[4705]: I0124 08:47:05.481635 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6bsz5/must-gather-r2vbl"] Jan 24 08:47:05 crc kubenswrapper[4705]: I0124 08:47:05.495165 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 08:47:05 crc kubenswrapper[4705]: I0124 08:47:05.732515 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bsz5/must-gather-r2vbl" event={"ID":"f7655f65-4aae-4328-8800-400aca52aa98","Type":"ContainerStarted","Data":"dd87725a1df0cbbc30d0df1e672577559340602614b1777fee95bd56f255e926"} Jan 24 08:47:12 crc kubenswrapper[4705]: I0124 08:47:12.575492 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:47:12 crc kubenswrapper[4705]: E0124 08:47:12.576315 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:47:12 crc kubenswrapper[4705]: I0124 08:47:12.806747 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bsz5/must-gather-r2vbl" event={"ID":"f7655f65-4aae-4328-8800-400aca52aa98","Type":"ContainerStarted","Data":"92f78df6c5da65c35bc436103b3cd82b39221916863130fdcc5c78770ce5c9ac"} Jan 24 08:47:12 crc kubenswrapper[4705]: I0124 08:47:12.806811 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bsz5/must-gather-r2vbl" 
event={"ID":"f7655f65-4aae-4328-8800-400aca52aa98","Type":"ContainerStarted","Data":"8f08d061a0573575125a37cd145e84f551f6daee8b603dbe401633c5d64b1465"} Jan 24 08:47:12 crc kubenswrapper[4705]: I0124 08:47:12.836445 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6bsz5/must-gather-r2vbl" podStartSLOduration=2.320481693 podStartE2EDuration="8.836408266s" podCreationTimestamp="2026-01-24 08:47:04 +0000 UTC" firstStartedPulling="2026-01-24 08:47:05.494953009 +0000 UTC m=+3964.214826297" lastFinishedPulling="2026-01-24 08:47:12.010879582 +0000 UTC m=+3970.730752870" observedRunningTime="2026-01-24 08:47:12.82202076 +0000 UTC m=+3971.541894048" watchObservedRunningTime="2026-01-24 08:47:12.836408266 +0000 UTC m=+3971.556281564" Jan 24 08:47:16 crc kubenswrapper[4705]: I0124 08:47:16.906472 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6bsz5/crc-debug-2mrmk"] Jan 24 08:47:16 crc kubenswrapper[4705]: I0124 08:47:16.908385 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bsz5/crc-debug-2mrmk" Jan 24 08:47:16 crc kubenswrapper[4705]: I0124 08:47:16.950575 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c-host\") pod \"crc-debug-2mrmk\" (UID: \"8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c\") " pod="openshift-must-gather-6bsz5/crc-debug-2mrmk" Jan 24 08:47:16 crc kubenswrapper[4705]: I0124 08:47:16.951162 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtrb7\" (UniqueName: \"kubernetes.io/projected/8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c-kube-api-access-jtrb7\") pod \"crc-debug-2mrmk\" (UID: \"8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c\") " pod="openshift-must-gather-6bsz5/crc-debug-2mrmk" Jan 24 08:47:17 crc kubenswrapper[4705]: I0124 08:47:17.053929 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtrb7\" (UniqueName: \"kubernetes.io/projected/8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c-kube-api-access-jtrb7\") pod \"crc-debug-2mrmk\" (UID: \"8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c\") " pod="openshift-must-gather-6bsz5/crc-debug-2mrmk" Jan 24 08:47:17 crc kubenswrapper[4705]: I0124 08:47:17.054003 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c-host\") pod \"crc-debug-2mrmk\" (UID: \"8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c\") " pod="openshift-must-gather-6bsz5/crc-debug-2mrmk" Jan 24 08:47:17 crc kubenswrapper[4705]: I0124 08:47:17.054202 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c-host\") pod \"crc-debug-2mrmk\" (UID: \"8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c\") " pod="openshift-must-gather-6bsz5/crc-debug-2mrmk" Jan 24 08:47:17 crc 
kubenswrapper[4705]: I0124 08:47:17.091905 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtrb7\" (UniqueName: \"kubernetes.io/projected/8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c-kube-api-access-jtrb7\") pod \"crc-debug-2mrmk\" (UID: \"8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c\") " pod="openshift-must-gather-6bsz5/crc-debug-2mrmk" Jan 24 08:47:17 crc kubenswrapper[4705]: I0124 08:47:17.225590 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bsz5/crc-debug-2mrmk" Jan 24 08:47:17 crc kubenswrapper[4705]: W0124 08:47:17.268251 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fa46d7d_f8b0_4797_9332_1e6d2c1b8e3c.slice/crio-12cf639f8daeb2d9a5a2cb52baa72628c9b6574882c1b1e18f636491c0ade481 WatchSource:0}: Error finding container 12cf639f8daeb2d9a5a2cb52baa72628c9b6574882c1b1e18f636491c0ade481: Status 404 returned error can't find the container with id 12cf639f8daeb2d9a5a2cb52baa72628c9b6574882c1b1e18f636491c0ade481 Jan 24 08:47:17 crc kubenswrapper[4705]: I0124 08:47:17.858655 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bsz5/crc-debug-2mrmk" event={"ID":"8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c","Type":"ContainerStarted","Data":"12cf639f8daeb2d9a5a2cb52baa72628c9b6574882c1b1e18f636491c0ade481"} Jan 24 08:47:27 crc kubenswrapper[4705]: I0124 08:47:27.577733 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:47:27 crc kubenswrapper[4705]: E0124 08:47:27.578756 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:47:30 crc kubenswrapper[4705]: I0124 08:47:30.005429 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bsz5/crc-debug-2mrmk" event={"ID":"8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c","Type":"ContainerStarted","Data":"75265a1153d819ac1e7bddfdce313fd342664579bc7d0d5ec40dd6588a85457d"} Jan 24 08:47:30 crc kubenswrapper[4705]: I0124 08:47:30.024896 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6bsz5/crc-debug-2mrmk" podStartSLOduration=2.190098016 podStartE2EDuration="14.024878999s" podCreationTimestamp="2026-01-24 08:47:16 +0000 UTC" firstStartedPulling="2026-01-24 08:47:17.271096837 +0000 UTC m=+3975.990970125" lastFinishedPulling="2026-01-24 08:47:29.10587782 +0000 UTC m=+3987.825751108" observedRunningTime="2026-01-24 08:47:30.020894147 +0000 UTC m=+3988.740767435" watchObservedRunningTime="2026-01-24 08:47:30.024878999 +0000 UTC m=+3988.744752287" Jan 24 08:47:40 crc kubenswrapper[4705]: I0124 08:47:40.575756 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:47:41 crc kubenswrapper[4705]: I0124 08:47:41.103711 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"e3c6ce56da70674f16b0913201212d84d13738616df582214bce280d71f18159"} Jan 24 08:47:46 crc kubenswrapper[4705]: I0124 08:47:46.151132 4705 generic.go:334] "Generic (PLEG): container finished" podID="8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c" containerID="75265a1153d819ac1e7bddfdce313fd342664579bc7d0d5ec40dd6588a85457d" exitCode=0 Jan 24 08:47:46 crc kubenswrapper[4705]: I0124 08:47:46.151217 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-6bsz5/crc-debug-2mrmk" event={"ID":"8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c","Type":"ContainerDied","Data":"75265a1153d819ac1e7bddfdce313fd342664579bc7d0d5ec40dd6588a85457d"} Jan 24 08:47:47 crc kubenswrapper[4705]: I0124 08:47:47.562299 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bsz5/crc-debug-2mrmk" Jan 24 08:47:47 crc kubenswrapper[4705]: I0124 08:47:47.596495 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6bsz5/crc-debug-2mrmk"] Jan 24 08:47:47 crc kubenswrapper[4705]: I0124 08:47:47.603650 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6bsz5/crc-debug-2mrmk"] Jan 24 08:47:47 crc kubenswrapper[4705]: I0124 08:47:47.645876 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtrb7\" (UniqueName: \"kubernetes.io/projected/8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c-kube-api-access-jtrb7\") pod \"8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c\" (UID: \"8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c\") " Jan 24 08:47:47 crc kubenswrapper[4705]: I0124 08:47:47.645937 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c-host\") pod \"8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c\" (UID: \"8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c\") " Jan 24 08:47:47 crc kubenswrapper[4705]: I0124 08:47:47.646129 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c-host" (OuterVolumeSpecName: "host") pod "8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c" (UID: "8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 08:47:47 crc kubenswrapper[4705]: I0124 08:47:47.646626 4705 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c-host\") on node \"crc\" DevicePath \"\"" Jan 24 08:47:47 crc kubenswrapper[4705]: I0124 08:47:47.662240 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c-kube-api-access-jtrb7" (OuterVolumeSpecName: "kube-api-access-jtrb7") pod "8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c" (UID: "8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c"). InnerVolumeSpecName "kube-api-access-jtrb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:47:47 crc kubenswrapper[4705]: I0124 08:47:47.748384 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtrb7\" (UniqueName: \"kubernetes.io/projected/8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c-kube-api-access-jtrb7\") on node \"crc\" DevicePath \"\"" Jan 24 08:47:48 crc kubenswrapper[4705]: I0124 08:47:48.172803 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12cf639f8daeb2d9a5a2cb52baa72628c9b6574882c1b1e18f636491c0ade481" Jan 24 08:47:48 crc kubenswrapper[4705]: I0124 08:47:48.172940 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bsz5/crc-debug-2mrmk" Jan 24 08:47:48 crc kubenswrapper[4705]: I0124 08:47:48.803489 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6bsz5/crc-debug-46trk"] Jan 24 08:47:48 crc kubenswrapper[4705]: E0124 08:47:48.804047 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c" containerName="container-00" Jan 24 08:47:48 crc kubenswrapper[4705]: I0124 08:47:48.804063 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c" containerName="container-00" Jan 24 08:47:48 crc kubenswrapper[4705]: I0124 08:47:48.804336 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c" containerName="container-00" Jan 24 08:47:48 crc kubenswrapper[4705]: I0124 08:47:48.805311 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bsz5/crc-debug-46trk" Jan 24 08:47:48 crc kubenswrapper[4705]: I0124 08:47:48.875887 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lvk9\" (UniqueName: \"kubernetes.io/projected/1b33250e-db87-4a52-9f1d-1969723298b0-kube-api-access-5lvk9\") pod \"crc-debug-46trk\" (UID: \"1b33250e-db87-4a52-9f1d-1969723298b0\") " pod="openshift-must-gather-6bsz5/crc-debug-46trk" Jan 24 08:47:48 crc kubenswrapper[4705]: I0124 08:47:48.876196 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b33250e-db87-4a52-9f1d-1969723298b0-host\") pod \"crc-debug-46trk\" (UID: \"1b33250e-db87-4a52-9f1d-1969723298b0\") " pod="openshift-must-gather-6bsz5/crc-debug-46trk" Jan 24 08:47:48 crc kubenswrapper[4705]: I0124 08:47:48.978089 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lvk9\" (UniqueName: 
\"kubernetes.io/projected/1b33250e-db87-4a52-9f1d-1969723298b0-kube-api-access-5lvk9\") pod \"crc-debug-46trk\" (UID: \"1b33250e-db87-4a52-9f1d-1969723298b0\") " pod="openshift-must-gather-6bsz5/crc-debug-46trk" Jan 24 08:47:48 crc kubenswrapper[4705]: I0124 08:47:48.978192 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b33250e-db87-4a52-9f1d-1969723298b0-host\") pod \"crc-debug-46trk\" (UID: \"1b33250e-db87-4a52-9f1d-1969723298b0\") " pod="openshift-must-gather-6bsz5/crc-debug-46trk" Jan 24 08:47:48 crc kubenswrapper[4705]: I0124 08:47:48.978330 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b33250e-db87-4a52-9f1d-1969723298b0-host\") pod \"crc-debug-46trk\" (UID: \"1b33250e-db87-4a52-9f1d-1969723298b0\") " pod="openshift-must-gather-6bsz5/crc-debug-46trk" Jan 24 08:47:49 crc kubenswrapper[4705]: I0124 08:47:49.341418 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lvk9\" (UniqueName: \"kubernetes.io/projected/1b33250e-db87-4a52-9f1d-1969723298b0-kube-api-access-5lvk9\") pod \"crc-debug-46trk\" (UID: \"1b33250e-db87-4a52-9f1d-1969723298b0\") " pod="openshift-must-gather-6bsz5/crc-debug-46trk" Jan 24 08:47:49 crc kubenswrapper[4705]: I0124 08:47:49.425477 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bsz5/crc-debug-46trk" Jan 24 08:47:49 crc kubenswrapper[4705]: I0124 08:47:49.593030 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c" path="/var/lib/kubelet/pods/8fa46d7d-f8b0-4797-9332-1e6d2c1b8e3c/volumes" Jan 24 08:47:50 crc kubenswrapper[4705]: I0124 08:47:50.192579 4705 generic.go:334] "Generic (PLEG): container finished" podID="1b33250e-db87-4a52-9f1d-1969723298b0" containerID="d177490798a5323171acd805d50103ad6f68449613588e5c44ad7de5303e3a76" exitCode=1 Jan 24 08:47:50 crc kubenswrapper[4705]: I0124 08:47:50.193054 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bsz5/crc-debug-46trk" event={"ID":"1b33250e-db87-4a52-9f1d-1969723298b0","Type":"ContainerDied","Data":"d177490798a5323171acd805d50103ad6f68449613588e5c44ad7de5303e3a76"} Jan 24 08:47:50 crc kubenswrapper[4705]: I0124 08:47:50.193927 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bsz5/crc-debug-46trk" event={"ID":"1b33250e-db87-4a52-9f1d-1969723298b0","Type":"ContainerStarted","Data":"54b5471b9a9965a4d664a093d491232e461e38e3b13cfcb0a67f0eddfa99a809"} Jan 24 08:47:50 crc kubenswrapper[4705]: I0124 08:47:50.252756 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6bsz5/crc-debug-46trk"] Jan 24 08:47:50 crc kubenswrapper[4705]: I0124 08:47:50.261545 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6bsz5/crc-debug-46trk"] Jan 24 08:47:51 crc kubenswrapper[4705]: I0124 08:47:51.318719 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bsz5/crc-debug-46trk" Jan 24 08:47:51 crc kubenswrapper[4705]: I0124 08:47:51.426548 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b33250e-db87-4a52-9f1d-1969723298b0-host\") pod \"1b33250e-db87-4a52-9f1d-1969723298b0\" (UID: \"1b33250e-db87-4a52-9f1d-1969723298b0\") " Jan 24 08:47:51 crc kubenswrapper[4705]: I0124 08:47:51.426629 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b33250e-db87-4a52-9f1d-1969723298b0-host" (OuterVolumeSpecName: "host") pod "1b33250e-db87-4a52-9f1d-1969723298b0" (UID: "1b33250e-db87-4a52-9f1d-1969723298b0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 08:47:51 crc kubenswrapper[4705]: I0124 08:47:51.426896 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lvk9\" (UniqueName: \"kubernetes.io/projected/1b33250e-db87-4a52-9f1d-1969723298b0-kube-api-access-5lvk9\") pod \"1b33250e-db87-4a52-9f1d-1969723298b0\" (UID: \"1b33250e-db87-4a52-9f1d-1969723298b0\") " Jan 24 08:47:51 crc kubenswrapper[4705]: I0124 08:47:51.427303 4705 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b33250e-db87-4a52-9f1d-1969723298b0-host\") on node \"crc\" DevicePath \"\"" Jan 24 08:47:51 crc kubenswrapper[4705]: I0124 08:47:51.432124 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b33250e-db87-4a52-9f1d-1969723298b0-kube-api-access-5lvk9" (OuterVolumeSpecName: "kube-api-access-5lvk9") pod "1b33250e-db87-4a52-9f1d-1969723298b0" (UID: "1b33250e-db87-4a52-9f1d-1969723298b0"). InnerVolumeSpecName "kube-api-access-5lvk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:47:51 crc kubenswrapper[4705]: I0124 08:47:51.529280 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lvk9\" (UniqueName: \"kubernetes.io/projected/1b33250e-db87-4a52-9f1d-1969723298b0-kube-api-access-5lvk9\") on node \"crc\" DevicePath \"\"" Jan 24 08:47:51 crc kubenswrapper[4705]: I0124 08:47:51.588518 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b33250e-db87-4a52-9f1d-1969723298b0" path="/var/lib/kubelet/pods/1b33250e-db87-4a52-9f1d-1969723298b0/volumes" Jan 24 08:47:52 crc kubenswrapper[4705]: I0124 08:47:52.216239 4705 scope.go:117] "RemoveContainer" containerID="d177490798a5323171acd805d50103ad6f68449613588e5c44ad7de5303e3a76" Jan 24 08:47:52 crc kubenswrapper[4705]: I0124 08:47:52.216366 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bsz5/crc-debug-46trk" Jan 24 08:48:50 crc kubenswrapper[4705]: I0124 08:48:50.754719 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_2acfda0f-e35f-4215-8f97-dbb885b75b34/init-config-reloader/0.log" Jan 24 08:48:51 crc kubenswrapper[4705]: I0124 08:48:51.031165 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_2acfda0f-e35f-4215-8f97-dbb885b75b34/init-config-reloader/0.log" Jan 24 08:48:51 crc kubenswrapper[4705]: I0124 08:48:51.070350 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_2acfda0f-e35f-4215-8f97-dbb885b75b34/alertmanager/0.log" Jan 24 08:48:51 crc kubenswrapper[4705]: I0124 08:48:51.099980 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_2acfda0f-e35f-4215-8f97-dbb885b75b34/config-reloader/0.log" Jan 24 08:48:51 crc kubenswrapper[4705]: I0124 08:48:51.270013 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_95e14985-fae8-4e28-91fe-4234d31f3f33/aodh-api/0.log" Jan 24 08:48:51 crc kubenswrapper[4705]: I0124 08:48:51.274541 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_95e14985-fae8-4e28-91fe-4234d31f3f33/aodh-evaluator/0.log" Jan 24 08:48:51 crc kubenswrapper[4705]: I0124 08:48:51.305980 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_95e14985-fae8-4e28-91fe-4234d31f3f33/aodh-listener/0.log" Jan 24 08:48:51 crc kubenswrapper[4705]: I0124 08:48:51.422537 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_95e14985-fae8-4e28-91fe-4234d31f3f33/aodh-notifier/0.log" Jan 24 08:48:51 crc kubenswrapper[4705]: I0124 08:48:51.458514 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-55f654f7bb-65t7w_9cefd3d6-3762-41d6-adc7-31134fde2bb7/barbican-api/0.log" Jan 24 08:48:51 crc kubenswrapper[4705]: I0124 08:48:51.489074 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-55f654f7bb-65t7w_9cefd3d6-3762-41d6-adc7-31134fde2bb7/barbican-api-log/0.log" Jan 24 08:48:51 crc kubenswrapper[4705]: I0124 08:48:51.654626 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-dbf49b754-xk8bz_85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e/barbican-keystone-listener/0.log" Jan 24 08:48:51 crc kubenswrapper[4705]: I0124 08:48:51.739034 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-dbf49b754-xk8bz_85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e/barbican-keystone-listener-log/0.log" Jan 24 08:48:51 crc kubenswrapper[4705]: I0124 08:48:51.969249 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6698559bb9-vn9c8_6328de33-ec5c-402a-aece-9b944c259b59/barbican-worker-log/0.log" Jan 24 08:48:52 crc kubenswrapper[4705]: I0124 08:48:52.084098 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-6698559bb9-vn9c8_6328de33-ec5c-402a-aece-9b944c259b59/barbican-worker/0.log" Jan 24 08:48:52 crc kubenswrapper[4705]: I0124 08:48:52.261754 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t_67b8ef17-3a9a-4ebc-af02-eb475e2304af/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:48:52 crc kubenswrapper[4705]: I0124 08:48:52.379575 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7d641fc4-49a3-4686-9839-730afa8afd5d/ceilometer-central-agent/0.log" Jan 24 08:48:52 crc kubenswrapper[4705]: I0124 08:48:52.382164 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7d641fc4-49a3-4686-9839-730afa8afd5d/ceilometer-notification-agent/0.log" Jan 24 08:48:52 crc kubenswrapper[4705]: I0124 08:48:52.539390 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7d641fc4-49a3-4686-9839-730afa8afd5d/proxy-httpd/0.log" Jan 24 08:48:52 crc kubenswrapper[4705]: I0124 08:48:52.549981 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7d641fc4-49a3-4686-9839-730afa8afd5d/sg-core/0.log" Jan 24 08:48:52 crc kubenswrapper[4705]: I0124 08:48:52.716384 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6b3b0e00-82d8-4096-80b1-a9edffb3cdaf/cinder-api/0.log" Jan 24 08:48:52 crc kubenswrapper[4705]: I0124 08:48:52.798607 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6b3b0e00-82d8-4096-80b1-a9edffb3cdaf/cinder-api-log/0.log" Jan 24 08:48:53 crc kubenswrapper[4705]: I0124 08:48:53.088071 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_76eadf8b-3ddc-461f-b8d6-87978146e077/cinder-scheduler/0.log" Jan 24 08:48:53 crc kubenswrapper[4705]: I0124 08:48:53.090012 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_76eadf8b-3ddc-461f-b8d6-87978146e077/probe/0.log" Jan 24 08:48:53 crc kubenswrapper[4705]: I0124 08:48:53.155143 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj_f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:48:53 crc kubenswrapper[4705]: I0124 08:48:53.284548 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx_b3f04082-08d1-49ca-91fd-b538d81a8923/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:48:53 crc kubenswrapper[4705]: I0124 08:48:53.372468 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-ks5p8_333dd8c4-e753-48ab-be34-640378c23251/init/0.log" Jan 24 08:48:53 crc kubenswrapper[4705]: I0124 08:48:53.583436 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-ks5p8_333dd8c4-e753-48ab-be34-640378c23251/init/0.log" Jan 24 08:48:53 crc kubenswrapper[4705]: I0124 08:48:53.607318 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-ks5p8_333dd8c4-e753-48ab-be34-640378c23251/dnsmasq-dns/0.log" Jan 24 08:48:53 crc kubenswrapper[4705]: I0124 08:48:53.659847 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d_f92be2f8-1ff3-4237-8046-ff1352af1bef/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:48:53 crc kubenswrapper[4705]: I0124 08:48:53.839874 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_881a6a33-1c19-4868-b1d8-ff8efde83513/glance-log/0.log" Jan 24 08:48:53 crc kubenswrapper[4705]: I0124 08:48:53.857216 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_881a6a33-1c19-4868-b1d8-ff8efde83513/glance-httpd/0.log" Jan 24 08:48:54 crc kubenswrapper[4705]: I0124 08:48:54.044877 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_2ddfb089-24fd-436d-9f98-df7b3933d5f1/glance-log/0.log" Jan 24 08:48:54 crc kubenswrapper[4705]: I0124 08:48:54.056770 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_2ddfb089-24fd-436d-9f98-df7b3933d5f1/glance-httpd/0.log" Jan 24 08:48:54 crc kubenswrapper[4705]: I0124 08:48:54.515183 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-6bbd698cdd-bj25j_bb805121-ae65-457e-877f-2db0ae5e61dc/heat-api/0.log" Jan 24 08:48:54 crc kubenswrapper[4705]: I0124 08:48:54.584620 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7798c79c68-jdzb7_d28ccc81-d764-4810-a649-42ff56ae43c8/heat-engine/0.log" Jan 24 08:48:54 crc kubenswrapper[4705]: I0124 08:48:54.618669 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-67559d7f8-s8rzr_13afed15-05ec-4ae6-a29f-5c9226770a19/heat-cfnapi/0.log" Jan 24 08:48:54 crc kubenswrapper[4705]: I0124 08:48:54.768037 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-565zf_69797704-2611-4e94-8321-878049b18d9e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:48:54 crc kubenswrapper[4705]: I0124 08:48:54.887260 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-5thzw_751e9e62-9148-48d5-9630-158d42b6b78d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:48:55 crc kubenswrapper[4705]: I0124 08:48:55.074995 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_b72fb68d-e944-4d99-b1d0-eb097c807e14/kube-state-metrics/0.log" Jan 24 08:48:55 crc kubenswrapper[4705]: I0124 08:48:55.178680 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5c7b7bd5d5-rdcz5_d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2/keystone-api/0.log" Jan 24 08:48:55 crc kubenswrapper[4705]: I0124 08:48:55.333726 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs_892c0147-b4a3-451d-9c4c-c2a0cb3cf56e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:48:55 crc kubenswrapper[4705]: I0124 08:48:55.504461 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bcd878cb5-xnt7l_def20def-8ec8-4bb9-9c58-c557b1610ae9/neutron-api/0.log" Jan 24 08:48:55 crc kubenswrapper[4705]: I0124 08:48:55.611228 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bcd878cb5-xnt7l_def20def-8ec8-4bb9-9c58-c557b1610ae9/neutron-httpd/0.log" Jan 24 08:48:55 crc kubenswrapper[4705]: I0124 08:48:55.964057 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz_3f03fcbf-053b-4f3b-b96a-e7f325f36a0a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:48:56 crc kubenswrapper[4705]: I0124 08:48:56.286064 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_bdffe46c-ac47-422d-aec3-896fa1575ca7/nova-api-log/0.log" Jan 24 08:48:56 crc kubenswrapper[4705]: I0124 08:48:56.362647 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7e941386-bff0-4fd5-a452-0f659b35eae9/nova-cell0-conductor-conductor/0.log" Jan 24 08:48:56 crc kubenswrapper[4705]: I0124 08:48:56.457267 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_bdffe46c-ac47-422d-aec3-896fa1575ca7/nova-api-api/0.log" Jan 24 08:48:56 
crc kubenswrapper[4705]: I0124 08:48:56.585229 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_0d7a0724-6bfc-440e-958b-28313c59010d/nova-cell1-conductor-conductor/0.log" Jan 24 08:48:56 crc kubenswrapper[4705]: I0124 08:48:56.712866 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_90e3deeb-1218-4c9b-9e33-3e720ca605bc/nova-cell1-novncproxy-novncproxy/0.log" Jan 24 08:48:56 crc kubenswrapper[4705]: I0124 08:48:56.840777 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-rmhql_392633fe-e467-4669-9773-89b44ed68ac6/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:48:57 crc kubenswrapper[4705]: I0124 08:48:57.041604 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fbcacec9-3f9e-488c-846b-708af727b753/nova-metadata-log/0.log" Jan 24 08:48:57 crc kubenswrapper[4705]: I0124 08:48:57.202798 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ba8d3653-1ade-4d27-a7fa-06e616ffe7f2/nova-scheduler-scheduler/0.log" Jan 24 08:48:57 crc kubenswrapper[4705]: I0124 08:48:57.275654 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b/mysql-bootstrap/0.log" Jan 24 08:48:57 crc kubenswrapper[4705]: I0124 08:48:57.427462 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b/mysql-bootstrap/0.log" Jan 24 08:48:57 crc kubenswrapper[4705]: I0124 08:48:57.470743 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b/galera/0.log" Jan 24 08:48:57 crc kubenswrapper[4705]: I0124 08:48:57.661215 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_95a51efd-0ac3-4c02-8052-5b4017444820/mysql-bootstrap/0.log" Jan 24 08:48:57 crc kubenswrapper[4705]: I0124 08:48:57.948582 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_95a51efd-0ac3-4c02-8052-5b4017444820/mysql-bootstrap/0.log" Jan 24 08:48:58 crc kubenswrapper[4705]: I0124 08:48:58.008806 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_95a51efd-0ac3-4c02-8052-5b4017444820/galera/0.log" Jan 24 08:48:58 crc kubenswrapper[4705]: I0124 08:48:58.177395 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5bf2f8d1-1a23-4328-9169-1dea01964d94/openstackclient/0.log" Jan 24 08:48:58 crc kubenswrapper[4705]: I0124 08:48:58.275042 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-dqhqz_e50e3aa7-48d0-4559-9f09-f0a9a54232a7/ovn-controller/0.log" Jan 24 08:48:58 crc kubenswrapper[4705]: I0124 08:48:58.382737 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fbcacec9-3f9e-488c-846b-708af727b753/nova-metadata-metadata/0.log" Jan 24 08:48:58 crc kubenswrapper[4705]: I0124 08:48:58.501428 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hp2mc_a80046d0-b499-49e8-98aa-78869a5f0482/openstack-network-exporter/0.log" Jan 24 08:48:58 crc kubenswrapper[4705]: I0124 08:48:58.586424 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-llw8s_1972dfce-f49c-481e-a252-f1c8ad52ecc5/ovsdb-server-init/0.log" Jan 24 08:48:58 crc kubenswrapper[4705]: I0124 08:48:58.821869 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-llw8s_1972dfce-f49c-481e-a252-f1c8ad52ecc5/ovsdb-server-init/0.log" Jan 24 08:48:58 crc kubenswrapper[4705]: I0124 08:48:58.837360 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-llw8s_1972dfce-f49c-481e-a252-f1c8ad52ecc5/ovsdb-server/0.log" Jan 24 08:48:58 crc kubenswrapper[4705]: I0124 08:48:58.899207 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-llw8s_1972dfce-f49c-481e-a252-f1c8ad52ecc5/ovs-vswitchd/0.log" Jan 24 08:48:59 crc kubenswrapper[4705]: I0124 08:48:59.065814 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-bfqnl_1dfe7e12-9632-436a-b440-02c0f710ca04/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:48:59 crc kubenswrapper[4705]: I0124 08:48:59.108580 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_89e4ed86-cfcf-457e-bca5-29d0001a7785/openstack-network-exporter/0.log" Jan 24 08:48:59 crc kubenswrapper[4705]: I0124 08:48:59.202533 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_89e4ed86-cfcf-457e-bca5-29d0001a7785/ovn-northd/0.log" Jan 24 08:48:59 crc kubenswrapper[4705]: I0124 08:48:59.287086 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ac239835-9243-4353-8ca5-ff79405c5009/openstack-network-exporter/0.log" Jan 24 08:48:59 crc kubenswrapper[4705]: I0124 08:48:59.415882 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ac239835-9243-4353-8ca5-ff79405c5009/ovsdbserver-nb/0.log" Jan 24 08:48:59 crc kubenswrapper[4705]: I0124 08:48:59.492075 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6/ovsdbserver-sb/0.log" Jan 24 08:48:59 crc kubenswrapper[4705]: I0124 08:48:59.495025 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6/openstack-network-exporter/0.log" Jan 24 08:48:59 crc kubenswrapper[4705]: I0124 08:48:59.719882 4705 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_placement-55877bd6d-swpx2_e650ce3a-8142-469f-bb17-116626c2141b/placement-api/0.log" Jan 24 08:49:00 crc kubenswrapper[4705]: I0124 08:49:00.098239 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-55877bd6d-swpx2_e650ce3a-8142-469f-bb17-116626c2141b/placement-log/0.log" Jan 24 08:49:00 crc kubenswrapper[4705]: I0124 08:49:00.112321 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3aa939bc-2a0f-4610-a5c5-62043aa52bdf/init-config-reloader/0.log" Jan 24 08:49:00 crc kubenswrapper[4705]: I0124 08:49:00.305091 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3aa939bc-2a0f-4610-a5c5-62043aa52bdf/prometheus/0.log" Jan 24 08:49:00 crc kubenswrapper[4705]: I0124 08:49:00.378764 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3aa939bc-2a0f-4610-a5c5-62043aa52bdf/thanos-sidecar/0.log" Jan 24 08:49:00 crc kubenswrapper[4705]: I0124 08:49:00.412144 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3aa939bc-2a0f-4610-a5c5-62043aa52bdf/config-reloader/0.log" Jan 24 08:49:00 crc kubenswrapper[4705]: I0124 08:49:00.417923 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3aa939bc-2a0f-4610-a5c5-62043aa52bdf/init-config-reloader/0.log" Jan 24 08:49:00 crc kubenswrapper[4705]: I0124 08:49:00.619104 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_42a4eca6-7e02-48d4-a187-ea503285c378/setup-container/0.log" Jan 24 08:49:00 crc kubenswrapper[4705]: I0124 08:49:00.795483 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_42a4eca6-7e02-48d4-a187-ea503285c378/setup-container/0.log" Jan 24 08:49:00 crc kubenswrapper[4705]: I0124 08:49:00.873666 4705 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_203f66be-7cf6-4664-a0a8-9ed975352414/setup-container/0.log" Jan 24 08:49:00 crc kubenswrapper[4705]: I0124 08:49:00.940720 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_42a4eca6-7e02-48d4-a187-ea503285c378/rabbitmq/0.log" Jan 24 08:49:01 crc kubenswrapper[4705]: I0124 08:49:01.052599 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_203f66be-7cf6-4664-a0a8-9ed975352414/setup-container/0.log" Jan 24 08:49:01 crc kubenswrapper[4705]: I0124 08:49:01.176179 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_203f66be-7cf6-4664-a0a8-9ed975352414/rabbitmq/0.log" Jan 24 08:49:01 crc kubenswrapper[4705]: I0124 08:49:01.197802 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd_a0eb2e96-4e56-4c71-a977-7b27892ba77c/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:49:01 crc kubenswrapper[4705]: I0124 08:49:01.416514 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xfn7j_3579a044-5429-43aa-be25-6720cbb84d82/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:49:01 crc kubenswrapper[4705]: I0124 08:49:01.416937 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr_2b3c2835-0838-4592-ae5c-9d442ad0e351/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:49:01 crc kubenswrapper[4705]: I0124 08:49:01.604343 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-xdc7k_8c2d1fb0-3187-4f07-bc44-d3c81689b09e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:49:01 crc kubenswrapper[4705]: I0124 08:49:01.751764 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pqr72_23b20cce-9e55-4a5e-b3ba-72526a662b7d/ssh-known-hosts-edpm-deployment/0.log" Jan 24 08:49:01 crc kubenswrapper[4705]: I0124 08:49:01.926293 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58599c4547-sbsm4_1cce5e47-bb96-4468-8818-29869d013b7b/proxy-server/0.log" Jan 24 08:49:01 crc kubenswrapper[4705]: I0124 08:49:01.996143 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58599c4547-sbsm4_1cce5e47-bb96-4468-8818-29869d013b7b/proxy-httpd/0.log" Jan 24 08:49:02 crc kubenswrapper[4705]: I0124 08:49:02.296378 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wvgxh_71ae95bb-0592-4ebd-b74a-c2ed2cc5654e/swift-ring-rebalance/0.log" Jan 24 08:49:02 crc kubenswrapper[4705]: I0124 08:49:02.363742 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/account-auditor/0.log" Jan 24 08:49:02 crc kubenswrapper[4705]: I0124 08:49:02.416335 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/account-reaper/0.log" Jan 24 08:49:02 crc kubenswrapper[4705]: I0124 08:49:02.613493 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/account-replicator/0.log" Jan 24 08:49:02 crc kubenswrapper[4705]: I0124 08:49:02.621032 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/container-auditor/0.log" Jan 24 08:49:02 crc kubenswrapper[4705]: I0124 08:49:02.649973 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/account-server/0.log" Jan 24 08:49:02 crc kubenswrapper[4705]: I0124 08:49:02.727603 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/container-replicator/0.log" Jan 24 08:49:02 crc kubenswrapper[4705]: I0124 08:49:02.796258 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/container-server/0.log" Jan 24 08:49:02 crc kubenswrapper[4705]: I0124 08:49:02.873116 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/container-updater/0.log" Jan 24 08:49:02 crc kubenswrapper[4705]: I0124 08:49:02.936569 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/object-auditor/0.log" Jan 24 08:49:02 crc kubenswrapper[4705]: I0124 08:49:02.959024 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/object-expirer/0.log" Jan 24 08:49:03 crc kubenswrapper[4705]: I0124 08:49:03.034054 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/object-replicator/0.log" Jan 24 08:49:03 crc kubenswrapper[4705]: I0124 08:49:03.113582 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/object-server/0.log" Jan 24 08:49:03 crc kubenswrapper[4705]: I0124 08:49:03.135403 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/object-updater/0.log" Jan 24 08:49:03 crc kubenswrapper[4705]: I0124 08:49:03.267654 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/swift-recon-cron/0.log" Jan 24 08:49:03 crc kubenswrapper[4705]: I0124 08:49:03.274155 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/rsync/0.log" Jan 24 08:49:03 crc kubenswrapper[4705]: I0124 08:49:03.487803 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p_ca587b10-b782-4dd1-a3fa-e9dfd773a2e3/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:49:03 crc kubenswrapper[4705]: I0124 08:49:03.718153 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-z4xps_e6982665-cdec-4e7c-b9d1-0c7532cf8830/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:49:09 crc kubenswrapper[4705]: I0124 08:49:09.628200 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ec9c2213-448d-4532-b6a6-3f6242f5ab5f/memcached/0.log" Jan 24 08:49:29 crc kubenswrapper[4705]: I0124 08:49:29.423973 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5_ebc0e3ca-4a5f-4ae7-af73-92edccb58a67/util/0.log" Jan 24 08:49:29 crc kubenswrapper[4705]: I0124 08:49:29.633366 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5_ebc0e3ca-4a5f-4ae7-af73-92edccb58a67/util/0.log" Jan 24 08:49:29 crc kubenswrapper[4705]: I0124 08:49:29.646635 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5_ebc0e3ca-4a5f-4ae7-af73-92edccb58a67/pull/0.log" Jan 24 08:49:29 crc kubenswrapper[4705]: I0124 08:49:29.670240 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5_ebc0e3ca-4a5f-4ae7-af73-92edccb58a67/pull/0.log" Jan 24 08:49:29 crc kubenswrapper[4705]: I0124 08:49:29.813893 4705 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5_ebc0e3ca-4a5f-4ae7-af73-92edccb58a67/util/0.log" Jan 24 08:49:29 crc kubenswrapper[4705]: I0124 08:49:29.860143 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5_ebc0e3ca-4a5f-4ae7-af73-92edccb58a67/extract/0.log" Jan 24 08:49:29 crc kubenswrapper[4705]: I0124 08:49:29.865600 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5_ebc0e3ca-4a5f-4ae7-af73-92edccb58a67/pull/0.log" Jan 24 08:49:30 crc kubenswrapper[4705]: I0124 08:49:30.115627 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-nf4zc_0a119afa-9520-46bc-8fde-0b2974035e48/manager/0.log" Jan 24 08:49:30 crc kubenswrapper[4705]: I0124 08:49:30.127401 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-r5j5v_652fc521-e0f0-4d0c-8ca3-8077222ab892/manager/0.log" Jan 24 08:49:30 crc kubenswrapper[4705]: I0124 08:49:30.291434 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-dbvkx_91182c35-90b8-409a-ac96-191c754f5c9d/manager/0.log" Jan 24 08:49:30 crc kubenswrapper[4705]: I0124 08:49:30.409618 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-xsz7p_93151962-475c-412e-98d3-7363d8fd5f6c/manager/0.log" Jan 24 08:49:30 crc kubenswrapper[4705]: I0124 08:49:30.553909 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-sjc8r_338f4812-65cb-4a3e-a83e-73a72e4f31eb/manager/0.log" Jan 24 08:49:30 crc kubenswrapper[4705]: I0124 
08:49:30.574026 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-sj4dw_be549f5c-a477-4e7d-a928-0e9885ffa225/manager/0.log" Jan 24 08:49:30 crc kubenswrapper[4705]: I0124 08:49:30.868931 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-v629x_241de282-17c7-48c1-b4cb-fbeb9b98bd08/manager/0.log" Jan 24 08:49:31 crc kubenswrapper[4705]: I0124 08:49:31.027876 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-l4fkg_bef91cd6-2f77-474f-8258-e23ca5b37091/manager/0.log" Jan 24 08:49:31 crc kubenswrapper[4705]: I0124 08:49:31.075061 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-tgzdf_23f7495d-06eb-45e5-b5e6-e50169760b0b/manager/0.log" Jan 24 08:49:31 crc kubenswrapper[4705]: I0124 08:49:31.100688 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-s86rp_fd042d2d-7b0f-4ffd-b9fa-8d78a0be9e9e/manager/0.log" Jan 24 08:49:31 crc kubenswrapper[4705]: I0124 08:49:31.265405 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-x5h78_49e04e6c-68bd-4ebb-a6b8-8c728b7e3ead/manager/0.log" Jan 24 08:49:31 crc kubenswrapper[4705]: I0124 08:49:31.345196 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-mpgjf_bf85561a-7710-4a15-b4b1-c8f48e50dc53/manager/0.log" Jan 24 08:49:31 crc kubenswrapper[4705]: I0124 08:49:31.538961 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-drzkh_b14e84b5-9dcb-4280-9480-a6f34bf8c8dd/manager/0.log" Jan 24 08:49:31 crc 
kubenswrapper[4705]: I0124 08:49:31.547425 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-869gl_3c52d864-16a1-4eb6-80e9-ac7e5009bbd9/manager/0.log" Jan 24 08:49:31 crc kubenswrapper[4705]: I0124 08:49:31.818188 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz_5a7f4747-1fd9-4aa3-b954-e32101ebe927/manager/0.log" Jan 24 08:49:31 crc kubenswrapper[4705]: I0124 08:49:31.818256 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5f778d85fb-56s2d_1c03ba2e-ee1e-4afc-8f97-84439ceec36d/operator/0.log" Jan 24 08:49:32 crc kubenswrapper[4705]: I0124 08:49:32.056341 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fnqzn_508301de-d491-4dbb-9f4b-c2732d5007eb/registry-server/0.log" Jan 24 08:49:32 crc kubenswrapper[4705]: I0124 08:49:32.447659 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-k8q6j_eb05abb5-cee5-4e0d-9217-6154aebe5836/manager/0.log" Jan 24 08:49:32 crc kubenswrapper[4705]: I0124 08:49:32.453769 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-6445j_2e973d30-3868-4922-b576-12587d46810a/manager/0.log" Jan 24 08:49:32 crc kubenswrapper[4705]: I0124 08:49:32.715409 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-d4kz9_3fac85c2-ff36-44a9-ae92-947df3332178/operator/0.log" Jan 24 08:49:32 crc kubenswrapper[4705]: I0124 08:49:32.925365 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-f2vcw_404be92b-a12e-42d7-868f-adf825bc7c68/manager/0.log" Jan 24 
08:49:33 crc kubenswrapper[4705]: I0124 08:49:33.182399 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7c64596589-v9zxl_f5382856-3a6e-4d10-beb2-9df688e2f6c7/manager/0.log" Jan 24 08:49:33 crc kubenswrapper[4705]: I0124 08:49:33.309246 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-c6x57_a08c7b5c-356a-4a05-a600-82f6bf5aad91/manager/0.log" Jan 24 08:49:33 crc kubenswrapper[4705]: I0124 08:49:33.408246 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-5nngs_0ab0b790-0cf4-453a-9c4b-97a6ddf2bd36/manager/0.log" Jan 24 08:49:33 crc kubenswrapper[4705]: I0124 08:49:33.639461 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8d6967975-rkwgg_51813a09-552b-4f12-904a-840cf6829c80/manager/0.log" Jan 24 08:49:53 crc kubenswrapper[4705]: I0124 08:49:53.067685 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-7cvlf_a84e98fc-8911-4fe1-8242-e906ccfdb277/control-plane-machine-set-operator/0.log" Jan 24 08:49:53 crc kubenswrapper[4705]: I0124 08:49:53.197325 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hqzgc_1467a368-ffe2-4fd5-abca-e42018890e40/kube-rbac-proxy/0.log" Jan 24 08:49:53 crc kubenswrapper[4705]: I0124 08:49:53.201434 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hqzgc_1467a368-ffe2-4fd5-abca-e42018890e40/machine-api-operator/0.log" Jan 24 08:50:06 crc kubenswrapper[4705]: I0124 08:50:06.888088 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-858654f9db-7d248_b9303a69-3000-46da-a5eb-4c08989db796/cert-manager-controller/0.log" Jan 24 08:50:07 crc kubenswrapper[4705]: I0124 08:50:07.071617 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:50:07 crc kubenswrapper[4705]: I0124 08:50:07.071691 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:50:07 crc kubenswrapper[4705]: I0124 08:50:07.089481 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-xm7cv_5fd12a13-2f0a-45ca-99d8-87e45f8f5743/cert-manager-cainjector/0.log" Jan 24 08:50:07 crc kubenswrapper[4705]: I0124 08:50:07.138267 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-dz4zc_0d0523a0-74f2-455b-be13-f2c764d4b4e3/cert-manager-webhook/0.log" Jan 24 08:50:08 crc kubenswrapper[4705]: I0124 08:50:08.279998 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q6tc2"] Jan 24 08:50:08 crc kubenswrapper[4705]: E0124 08:50:08.281317 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b33250e-db87-4a52-9f1d-1969723298b0" containerName="container-00" Jan 24 08:50:08 crc kubenswrapper[4705]: I0124 08:50:08.281659 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b33250e-db87-4a52-9f1d-1969723298b0" containerName="container-00" Jan 24 08:50:08 crc kubenswrapper[4705]: I0124 08:50:08.282035 4705 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1b33250e-db87-4a52-9f1d-1969723298b0" containerName="container-00" Jan 24 08:50:08 crc kubenswrapper[4705]: I0124 08:50:08.283743 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:08 crc kubenswrapper[4705]: I0124 08:50:08.290586 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6tc2"] Jan 24 08:50:08 crc kubenswrapper[4705]: I0124 08:50:08.412452 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5604410-962a-4812-80e9-c8aaf3726ea9-utilities\") pod \"community-operators-q6tc2\" (UID: \"c5604410-962a-4812-80e9-c8aaf3726ea9\") " pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:08 crc kubenswrapper[4705]: I0124 08:50:08.412901 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gft9\" (UniqueName: \"kubernetes.io/projected/c5604410-962a-4812-80e9-c8aaf3726ea9-kube-api-access-5gft9\") pod \"community-operators-q6tc2\" (UID: \"c5604410-962a-4812-80e9-c8aaf3726ea9\") " pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:08 crc kubenswrapper[4705]: I0124 08:50:08.413706 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5604410-962a-4812-80e9-c8aaf3726ea9-catalog-content\") pod \"community-operators-q6tc2\" (UID: \"c5604410-962a-4812-80e9-c8aaf3726ea9\") " pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:08 crc kubenswrapper[4705]: I0124 08:50:08.516096 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c5604410-962a-4812-80e9-c8aaf3726ea9-catalog-content\") pod \"community-operators-q6tc2\" (UID: \"c5604410-962a-4812-80e9-c8aaf3726ea9\") " pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:08 crc kubenswrapper[4705]: I0124 08:50:08.516250 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5604410-962a-4812-80e9-c8aaf3726ea9-utilities\") pod \"community-operators-q6tc2\" (UID: \"c5604410-962a-4812-80e9-c8aaf3726ea9\") " pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:08 crc kubenswrapper[4705]: I0124 08:50:08.516495 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gft9\" (UniqueName: \"kubernetes.io/projected/c5604410-962a-4812-80e9-c8aaf3726ea9-kube-api-access-5gft9\") pod \"community-operators-q6tc2\" (UID: \"c5604410-962a-4812-80e9-c8aaf3726ea9\") " pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:08 crc kubenswrapper[4705]: I0124 08:50:08.516680 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5604410-962a-4812-80e9-c8aaf3726ea9-catalog-content\") pod \"community-operators-q6tc2\" (UID: \"c5604410-962a-4812-80e9-c8aaf3726ea9\") " pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:08 crc kubenswrapper[4705]: I0124 08:50:08.516803 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5604410-962a-4812-80e9-c8aaf3726ea9-utilities\") pod \"community-operators-q6tc2\" (UID: \"c5604410-962a-4812-80e9-c8aaf3726ea9\") " pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:08 crc kubenswrapper[4705]: I0124 08:50:08.537707 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gft9\" (UniqueName: 
\"kubernetes.io/projected/c5604410-962a-4812-80e9-c8aaf3726ea9-kube-api-access-5gft9\") pod \"community-operators-q6tc2\" (UID: \"c5604410-962a-4812-80e9-c8aaf3726ea9\") " pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:08 crc kubenswrapper[4705]: I0124 08:50:08.609929 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:09 crc kubenswrapper[4705]: I0124 08:50:09.229840 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6tc2"] Jan 24 08:50:09 crc kubenswrapper[4705]: W0124 08:50:09.239847 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5604410_962a_4812_80e9_c8aaf3726ea9.slice/crio-c2395e5b9a03831d1e618216e4dd65f9d9cacd489340a73a19280e204ed9714a WatchSource:0}: Error finding container c2395e5b9a03831d1e618216e4dd65f9d9cacd489340a73a19280e204ed9714a: Status 404 returned error can't find the container with id c2395e5b9a03831d1e618216e4dd65f9d9cacd489340a73a19280e204ed9714a Jan 24 08:50:09 crc kubenswrapper[4705]: I0124 08:50:09.607604 4705 generic.go:334] "Generic (PLEG): container finished" podID="c5604410-962a-4812-80e9-c8aaf3726ea9" containerID="1de4cd27de56b4537457dc49ff7847c857c01498ad70f6b3171fd685a7d98421" exitCode=0 Jan 24 08:50:09 crc kubenswrapper[4705]: I0124 08:50:09.607705 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6tc2" event={"ID":"c5604410-962a-4812-80e9-c8aaf3726ea9","Type":"ContainerDied","Data":"1de4cd27de56b4537457dc49ff7847c857c01498ad70f6b3171fd685a7d98421"} Jan 24 08:50:09 crc kubenswrapper[4705]: I0124 08:50:09.607874 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6tc2" 
event={"ID":"c5604410-962a-4812-80e9-c8aaf3726ea9","Type":"ContainerStarted","Data":"c2395e5b9a03831d1e618216e4dd65f9d9cacd489340a73a19280e204ed9714a"} Jan 24 08:50:10 crc kubenswrapper[4705]: I0124 08:50:10.620001 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6tc2" event={"ID":"c5604410-962a-4812-80e9-c8aaf3726ea9","Type":"ContainerStarted","Data":"26773fc190314ff4c6dd0ad75a242c1adc1927f7f553849226ff56deeacd52d7"} Jan 24 08:50:11 crc kubenswrapper[4705]: I0124 08:50:11.631700 4705 generic.go:334] "Generic (PLEG): container finished" podID="c5604410-962a-4812-80e9-c8aaf3726ea9" containerID="26773fc190314ff4c6dd0ad75a242c1adc1927f7f553849226ff56deeacd52d7" exitCode=0 Jan 24 08:50:11 crc kubenswrapper[4705]: I0124 08:50:11.631798 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6tc2" event={"ID":"c5604410-962a-4812-80e9-c8aaf3726ea9","Type":"ContainerDied","Data":"26773fc190314ff4c6dd0ad75a242c1adc1927f7f553849226ff56deeacd52d7"} Jan 24 08:50:12 crc kubenswrapper[4705]: I0124 08:50:12.644379 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6tc2" event={"ID":"c5604410-962a-4812-80e9-c8aaf3726ea9","Type":"ContainerStarted","Data":"fce1fce7ba99601f5df19a7378a52bb7afdd76bfe9c3e1f906efbe8e7e652816"} Jan 24 08:50:12 crc kubenswrapper[4705]: I0124 08:50:12.667868 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q6tc2" podStartSLOduration=2.125023046 podStartE2EDuration="4.66784327s" podCreationTimestamp="2026-01-24 08:50:08 +0000 UTC" firstStartedPulling="2026-01-24 08:50:09.609952989 +0000 UTC m=+4148.329826287" lastFinishedPulling="2026-01-24 08:50:12.152773223 +0000 UTC m=+4150.872646511" observedRunningTime="2026-01-24 08:50:12.664768523 +0000 UTC m=+4151.384641841" watchObservedRunningTime="2026-01-24 08:50:12.66784327 +0000 UTC 
m=+4151.387716569" Jan 24 08:50:18 crc kubenswrapper[4705]: I0124 08:50:18.610468 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:18 crc kubenswrapper[4705]: I0124 08:50:18.611266 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:18 crc kubenswrapper[4705]: I0124 08:50:18.733312 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:18 crc kubenswrapper[4705]: I0124 08:50:18.780603 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:20 crc kubenswrapper[4705]: I0124 08:50:20.776902 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-zc7nf_54c5f862-8725-4f20-9624-c854d2b48634/nmstate-console-plugin/0.log" Jan 24 08:50:21 crc kubenswrapper[4705]: I0124 08:50:21.006517 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-79zvw_8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb/nmstate-handler/0.log" Jan 24 08:50:21 crc kubenswrapper[4705]: I0124 08:50:21.114018 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-pfqmg_f63e787b-b789-4a2f-a0f4-fa433cefe73c/kube-rbac-proxy/0.log" Jan 24 08:50:21 crc kubenswrapper[4705]: I0124 08:50:21.189492 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-pfqmg_f63e787b-b789-4a2f-a0f4-fa433cefe73c/nmstate-metrics/0.log" Jan 24 08:50:21 crc kubenswrapper[4705]: I0124 08:50:21.201193 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-hn9tt_0bf1a582-10a6-4207-a953-16b7751ea5ef/nmstate-operator/0.log" Jan 24 08:50:21 crc 
kubenswrapper[4705]: I0124 08:50:21.426658 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-h4stc_f9ff4190-6e7e-4e11-8287-1f8c6aa35088/nmstate-webhook/0.log" Jan 24 08:50:21 crc kubenswrapper[4705]: I0124 08:50:21.660685 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6tc2"] Jan 24 08:50:21 crc kubenswrapper[4705]: I0124 08:50:21.661172 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q6tc2" podUID="c5604410-962a-4812-80e9-c8aaf3726ea9" containerName="registry-server" containerID="cri-o://fce1fce7ba99601f5df19a7378a52bb7afdd76bfe9c3e1f906efbe8e7e652816" gracePeriod=2 Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.780034 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.783915 4705 generic.go:334] "Generic (PLEG): container finished" podID="c5604410-962a-4812-80e9-c8aaf3726ea9" containerID="fce1fce7ba99601f5df19a7378a52bb7afdd76bfe9c3e1f906efbe8e7e652816" exitCode=0 Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.783960 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6tc2" event={"ID":"c5604410-962a-4812-80e9-c8aaf3726ea9","Type":"ContainerDied","Data":"fce1fce7ba99601f5df19a7378a52bb7afdd76bfe9c3e1f906efbe8e7e652816"} Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.783992 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6tc2" event={"ID":"c5604410-962a-4812-80e9-c8aaf3726ea9","Type":"ContainerDied","Data":"c2395e5b9a03831d1e618216e4dd65f9d9cacd489340a73a19280e204ed9714a"} Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.784019 4705 scope.go:117] "RemoveContainer" 
containerID="fce1fce7ba99601f5df19a7378a52bb7afdd76bfe9c3e1f906efbe8e7e652816" Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.831721 4705 scope.go:117] "RemoveContainer" containerID="26773fc190314ff4c6dd0ad75a242c1adc1927f7f553849226ff56deeacd52d7" Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.875590 4705 scope.go:117] "RemoveContainer" containerID="1de4cd27de56b4537457dc49ff7847c857c01498ad70f6b3171fd685a7d98421" Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.909032 4705 scope.go:117] "RemoveContainer" containerID="fce1fce7ba99601f5df19a7378a52bb7afdd76bfe9c3e1f906efbe8e7e652816" Jan 24 08:50:22 crc kubenswrapper[4705]: E0124 08:50:22.909542 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fce1fce7ba99601f5df19a7378a52bb7afdd76bfe9c3e1f906efbe8e7e652816\": container with ID starting with fce1fce7ba99601f5df19a7378a52bb7afdd76bfe9c3e1f906efbe8e7e652816 not found: ID does not exist" containerID="fce1fce7ba99601f5df19a7378a52bb7afdd76bfe9c3e1f906efbe8e7e652816" Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.909598 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce1fce7ba99601f5df19a7378a52bb7afdd76bfe9c3e1f906efbe8e7e652816"} err="failed to get container status \"fce1fce7ba99601f5df19a7378a52bb7afdd76bfe9c3e1f906efbe8e7e652816\": rpc error: code = NotFound desc = could not find container \"fce1fce7ba99601f5df19a7378a52bb7afdd76bfe9c3e1f906efbe8e7e652816\": container with ID starting with fce1fce7ba99601f5df19a7378a52bb7afdd76bfe9c3e1f906efbe8e7e652816 not found: ID does not exist" Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.909629 4705 scope.go:117] "RemoveContainer" containerID="26773fc190314ff4c6dd0ad75a242c1adc1927f7f553849226ff56deeacd52d7" Jan 24 08:50:22 crc kubenswrapper[4705]: E0124 08:50:22.910710 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"26773fc190314ff4c6dd0ad75a242c1adc1927f7f553849226ff56deeacd52d7\": container with ID starting with 26773fc190314ff4c6dd0ad75a242c1adc1927f7f553849226ff56deeacd52d7 not found: ID does not exist" containerID="26773fc190314ff4c6dd0ad75a242c1adc1927f7f553849226ff56deeacd52d7" Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.910740 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26773fc190314ff4c6dd0ad75a242c1adc1927f7f553849226ff56deeacd52d7"} err="failed to get container status \"26773fc190314ff4c6dd0ad75a242c1adc1927f7f553849226ff56deeacd52d7\": rpc error: code = NotFound desc = could not find container \"26773fc190314ff4c6dd0ad75a242c1adc1927f7f553849226ff56deeacd52d7\": container with ID starting with 26773fc190314ff4c6dd0ad75a242c1adc1927f7f553849226ff56deeacd52d7 not found: ID does not exist" Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.910755 4705 scope.go:117] "RemoveContainer" containerID="1de4cd27de56b4537457dc49ff7847c857c01498ad70f6b3171fd685a7d98421" Jan 24 08:50:22 crc kubenswrapper[4705]: E0124 08:50:22.911319 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1de4cd27de56b4537457dc49ff7847c857c01498ad70f6b3171fd685a7d98421\": container with ID starting with 1de4cd27de56b4537457dc49ff7847c857c01498ad70f6b3171fd685a7d98421 not found: ID does not exist" containerID="1de4cd27de56b4537457dc49ff7847c857c01498ad70f6b3171fd685a7d98421" Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.911393 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1de4cd27de56b4537457dc49ff7847c857c01498ad70f6b3171fd685a7d98421"} err="failed to get container status \"1de4cd27de56b4537457dc49ff7847c857c01498ad70f6b3171fd685a7d98421\": rpc error: code = NotFound desc = could not find container 
\"1de4cd27de56b4537457dc49ff7847c857c01498ad70f6b3171fd685a7d98421\": container with ID starting with 1de4cd27de56b4537457dc49ff7847c857c01498ad70f6b3171fd685a7d98421 not found: ID does not exist" Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.965925 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gft9\" (UniqueName: \"kubernetes.io/projected/c5604410-962a-4812-80e9-c8aaf3726ea9-kube-api-access-5gft9\") pod \"c5604410-962a-4812-80e9-c8aaf3726ea9\" (UID: \"c5604410-962a-4812-80e9-c8aaf3726ea9\") " Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.966222 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5604410-962a-4812-80e9-c8aaf3726ea9-utilities\") pod \"c5604410-962a-4812-80e9-c8aaf3726ea9\" (UID: \"c5604410-962a-4812-80e9-c8aaf3726ea9\") " Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.966245 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5604410-962a-4812-80e9-c8aaf3726ea9-catalog-content\") pod \"c5604410-962a-4812-80e9-c8aaf3726ea9\" (UID: \"c5604410-962a-4812-80e9-c8aaf3726ea9\") " Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.967406 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5604410-962a-4812-80e9-c8aaf3726ea9-utilities" (OuterVolumeSpecName: "utilities") pod "c5604410-962a-4812-80e9-c8aaf3726ea9" (UID: "c5604410-962a-4812-80e9-c8aaf3726ea9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:50:22 crc kubenswrapper[4705]: I0124 08:50:22.971899 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5604410-962a-4812-80e9-c8aaf3726ea9-kube-api-access-5gft9" (OuterVolumeSpecName: "kube-api-access-5gft9") pod "c5604410-962a-4812-80e9-c8aaf3726ea9" (UID: "c5604410-962a-4812-80e9-c8aaf3726ea9"). InnerVolumeSpecName "kube-api-access-5gft9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:50:23 crc kubenswrapper[4705]: I0124 08:50:23.016972 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5604410-962a-4812-80e9-c8aaf3726ea9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5604410-962a-4812-80e9-c8aaf3726ea9" (UID: "c5604410-962a-4812-80e9-c8aaf3726ea9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:50:23 crc kubenswrapper[4705]: I0124 08:50:23.067990 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5604410-962a-4812-80e9-c8aaf3726ea9-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:50:23 crc kubenswrapper[4705]: I0124 08:50:23.068018 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5604410-962a-4812-80e9-c8aaf3726ea9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:50:23 crc kubenswrapper[4705]: I0124 08:50:23.068032 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gft9\" (UniqueName: \"kubernetes.io/projected/c5604410-962a-4812-80e9-c8aaf3726ea9-kube-api-access-5gft9\") on node \"crc\" DevicePath \"\"" Jan 24 08:50:23 crc kubenswrapper[4705]: I0124 08:50:23.795152 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6tc2" Jan 24 08:50:23 crc kubenswrapper[4705]: I0124 08:50:23.820447 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6tc2"] Jan 24 08:50:23 crc kubenswrapper[4705]: I0124 08:50:23.829645 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q6tc2"] Jan 24 08:50:25 crc kubenswrapper[4705]: I0124 08:50:25.587875 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5604410-962a-4812-80e9-c8aaf3726ea9" path="/var/lib/kubelet/pods/c5604410-962a-4812-80e9-c8aaf3726ea9/volumes" Jan 24 08:50:35 crc kubenswrapper[4705]: I0124 08:50:35.401256 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-q6rz4_205fc2d4-b488-4221-b24e-c97e1447deb9/prometheus-operator/0.log" Jan 24 08:50:35 crc kubenswrapper[4705]: I0124 08:50:35.594581 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf_b32cfae6-0b9f-4565-b802-c667cc6def0a/prometheus-operator-admission-webhook/0.log" Jan 24 08:50:35 crc kubenswrapper[4705]: I0124 08:50:35.675736 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v_3747c1cc-2cec-4baf-b6f2-14109753b841/prometheus-operator-admission-webhook/0.log" Jan 24 08:50:35 crc kubenswrapper[4705]: I0124 08:50:35.826141 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-hjjgh_348d157c-9094-4f31-aadf-44f7a46f561b/operator/0.log" Jan 24 08:50:35 crc kubenswrapper[4705]: I0124 08:50:35.898669 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-l6h9z_3da07060-d23f-4ecc-9a3c-9d659a0ab121/perses-operator/0.log" Jan 24 08:50:37 
crc kubenswrapper[4705]: I0124 08:50:37.071504 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:50:37 crc kubenswrapper[4705]: I0124 08:50:37.072050 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:50:53 crc kubenswrapper[4705]: I0124 08:50:53.012247 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-zq9jr_24055761-2526-4195-98fd-ba2b83bc9f20/controller/0.log" Jan 24 08:50:53 crc kubenswrapper[4705]: I0124 08:50:53.028200 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-zq9jr_24055761-2526-4195-98fd-ba2b83bc9f20/kube-rbac-proxy/0.log" Jan 24 08:50:53 crc kubenswrapper[4705]: I0124 08:50:53.203311 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-frr-files/0.log" Jan 24 08:50:53 crc kubenswrapper[4705]: I0124 08:50:53.385665 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-frr-files/0.log" Jan 24 08:50:53 crc kubenswrapper[4705]: I0124 08:50:53.398295 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-reloader/0.log" Jan 24 08:50:53 crc kubenswrapper[4705]: I0124 08:50:53.430942 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-reloader/0.log" Jan 24 08:50:53 crc kubenswrapper[4705]: I0124 08:50:53.457209 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-metrics/0.log" Jan 24 08:50:53 crc kubenswrapper[4705]: I0124 08:50:53.620893 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-frr-files/0.log" Jan 24 08:50:53 crc kubenswrapper[4705]: I0124 08:50:53.739570 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-metrics/0.log" Jan 24 08:50:53 crc kubenswrapper[4705]: I0124 08:50:53.739760 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-reloader/0.log" Jan 24 08:50:53 crc kubenswrapper[4705]: I0124 08:50:53.746255 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-metrics/0.log" Jan 24 08:50:53 crc kubenswrapper[4705]: I0124 08:50:53.935264 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-frr-files/0.log" Jan 24 08:50:53 crc kubenswrapper[4705]: I0124 08:50:53.949473 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/controller/0.log" Jan 24 08:50:53 crc kubenswrapper[4705]: I0124 08:50:53.955155 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-metrics/0.log" Jan 24 08:50:53 crc kubenswrapper[4705]: I0124 08:50:53.971940 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-reloader/0.log" Jan 24 08:50:54 crc kubenswrapper[4705]: I0124 08:50:54.137959 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/frr-metrics/0.log" Jan 24 08:50:54 crc kubenswrapper[4705]: I0124 08:50:54.142252 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/kube-rbac-proxy/0.log" Jan 24 08:50:54 crc kubenswrapper[4705]: I0124 08:50:54.189729 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/kube-rbac-proxy-frr/0.log" Jan 24 08:50:54 crc kubenswrapper[4705]: I0124 08:50:54.368570 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/reloader/0.log" Jan 24 08:50:54 crc kubenswrapper[4705]: I0124 08:50:54.416525 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-47zdb_7ed76590-151f-416d-b485-dd0ec7a67fcc/frr-k8s-webhook-server/0.log" Jan 24 08:50:54 crc kubenswrapper[4705]: I0124 08:50:54.919566 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b969cdf7-4cn5k_6709cabe-fa28-43e6-9999-2da688ab6871/manager/0.log" Jan 24 08:50:55 crc kubenswrapper[4705]: I0124 08:50:55.282942 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-bfbbd9768-tw7cf_67f67aca-78d5-495d-a47d-ce2fdefc502b/webhook-server/0.log" Jan 24 08:50:55 crc kubenswrapper[4705]: I0124 08:50:55.390542 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ff8n7_b43877b9-1325-4a18-abe6-0aea41048802/kube-rbac-proxy/0.log" Jan 24 08:50:55 crc kubenswrapper[4705]: I0124 08:50:55.582008 4705 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/frr/0.log" Jan 24 08:50:55 crc kubenswrapper[4705]: I0124 08:50:55.939950 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ff8n7_b43877b9-1325-4a18-abe6-0aea41048802/speaker/0.log" Jan 24 08:51:07 crc kubenswrapper[4705]: I0124 08:51:07.071551 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:51:07 crc kubenswrapper[4705]: I0124 08:51:07.072325 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:51:07 crc kubenswrapper[4705]: I0124 08:51:07.072423 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 08:51:07 crc kubenswrapper[4705]: I0124 08:51:07.073004 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e3c6ce56da70674f16b0913201212d84d13738616df582214bce280d71f18159"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 08:51:07 crc kubenswrapper[4705]: I0124 08:51:07.073059 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" 
containerName="machine-config-daemon" containerID="cri-o://e3c6ce56da70674f16b0913201212d84d13738616df582214bce280d71f18159" gracePeriod=600 Jan 24 08:51:07 crc kubenswrapper[4705]: I0124 08:51:07.490856 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerID="e3c6ce56da70674f16b0913201212d84d13738616df582214bce280d71f18159" exitCode=0 Jan 24 08:51:07 crc kubenswrapper[4705]: I0124 08:51:07.490943 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"e3c6ce56da70674f16b0913201212d84d13738616df582214bce280d71f18159"} Jan 24 08:51:07 crc kubenswrapper[4705]: I0124 08:51:07.491272 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c"} Jan 24 08:51:07 crc kubenswrapper[4705]: I0124 08:51:07.491336 4705 scope.go:117] "RemoveContainer" containerID="1bc986065f56df3726c38e489cc802045458eb8750b2711e2efece5ad1b54b73" Jan 24 08:51:12 crc kubenswrapper[4705]: I0124 08:51:12.028947 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp_c6531f50-1387-4cfa-946d-ef139131e7d0/util/0.log" Jan 24 08:51:12 crc kubenswrapper[4705]: I0124 08:51:12.203007 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp_c6531f50-1387-4cfa-946d-ef139131e7d0/pull/0.log" Jan 24 08:51:12 crc kubenswrapper[4705]: I0124 08:51:12.229189 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp_c6531f50-1387-4cfa-946d-ef139131e7d0/util/0.log" Jan 24 08:51:12 crc kubenswrapper[4705]: I0124 08:51:12.267585 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp_c6531f50-1387-4cfa-946d-ef139131e7d0/pull/0.log" Jan 24 08:51:12 crc kubenswrapper[4705]: I0124 08:51:12.451035 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp_c6531f50-1387-4cfa-946d-ef139131e7d0/pull/0.log" Jan 24 08:51:12 crc kubenswrapper[4705]: I0124 08:51:12.454756 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp_c6531f50-1387-4cfa-946d-ef139131e7d0/util/0.log" Jan 24 08:51:12 crc kubenswrapper[4705]: I0124 08:51:12.490846 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp_c6531f50-1387-4cfa-946d-ef139131e7d0/extract/0.log" Jan 24 08:51:12 crc kubenswrapper[4705]: I0124 08:51:12.855756 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f_c803c260-b4e8-4052-9f31-174e7abed57d/util/0.log" Jan 24 08:51:12 crc kubenswrapper[4705]: I0124 08:51:12.955962 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f_c803c260-b4e8-4052-9f31-174e7abed57d/pull/0.log" Jan 24 08:51:12 crc kubenswrapper[4705]: I0124 08:51:12.968958 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f_c803c260-b4e8-4052-9f31-174e7abed57d/util/0.log" Jan 24 
08:51:13 crc kubenswrapper[4705]: I0124 08:51:13.028733 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f_c803c260-b4e8-4052-9f31-174e7abed57d/pull/0.log" Jan 24 08:51:13 crc kubenswrapper[4705]: I0124 08:51:13.188072 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f_c803c260-b4e8-4052-9f31-174e7abed57d/util/0.log" Jan 24 08:51:13 crc kubenswrapper[4705]: I0124 08:51:13.235273 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f_c803c260-b4e8-4052-9f31-174e7abed57d/extract/0.log" Jan 24 08:51:13 crc kubenswrapper[4705]: I0124 08:51:13.236504 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f_c803c260-b4e8-4052-9f31-174e7abed57d/pull/0.log" Jan 24 08:51:14 crc kubenswrapper[4705]: I0124 08:51:14.007729 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz_8d7c435c-74d0-498b-97ba-bdc522a1b144/util/0.log" Jan 24 08:51:14 crc kubenswrapper[4705]: I0124 08:51:14.237264 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz_8d7c435c-74d0-498b-97ba-bdc522a1b144/util/0.log" Jan 24 08:51:14 crc kubenswrapper[4705]: I0124 08:51:14.266700 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz_8d7c435c-74d0-498b-97ba-bdc522a1b144/pull/0.log" Jan 24 08:51:14 crc kubenswrapper[4705]: I0124 08:51:14.267453 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz_8d7c435c-74d0-498b-97ba-bdc522a1b144/pull/0.log" Jan 24 08:51:14 crc kubenswrapper[4705]: I0124 08:51:14.462025 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz_8d7c435c-74d0-498b-97ba-bdc522a1b144/pull/0.log" Jan 24 08:51:14 crc kubenswrapper[4705]: I0124 08:51:14.507670 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz_8d7c435c-74d0-498b-97ba-bdc522a1b144/util/0.log" Jan 24 08:51:14 crc kubenswrapper[4705]: I0124 08:51:14.535844 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz_8d7c435c-74d0-498b-97ba-bdc522a1b144/extract/0.log" Jan 24 08:51:14 crc kubenswrapper[4705]: I0124 08:51:14.669921 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rmh7t_2fbcc45f-578c-43e4-9351-4b30d72d28f9/extract-utilities/0.log" Jan 24 08:51:14 crc kubenswrapper[4705]: I0124 08:51:14.843944 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rmh7t_2fbcc45f-578c-43e4-9351-4b30d72d28f9/extract-content/0.log" Jan 24 08:51:14 crc kubenswrapper[4705]: I0124 08:51:14.854666 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rmh7t_2fbcc45f-578c-43e4-9351-4b30d72d28f9/extract-content/0.log" Jan 24 08:51:14 crc kubenswrapper[4705]: I0124 08:51:14.880678 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rmh7t_2fbcc45f-578c-43e4-9351-4b30d72d28f9/extract-utilities/0.log" Jan 24 08:51:15 crc kubenswrapper[4705]: I0124 08:51:15.095135 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-rmh7t_2fbcc45f-578c-43e4-9351-4b30d72d28f9/extract-content/0.log" Jan 24 08:51:15 crc kubenswrapper[4705]: I0124 08:51:15.103158 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rmh7t_2fbcc45f-578c-43e4-9351-4b30d72d28f9/extract-utilities/0.log" Jan 24 08:51:15 crc kubenswrapper[4705]: I0124 08:51:15.255270 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-srn4g_7bd1c960-7a5d-48dd-bef7-2fd94acef48d/extract-utilities/0.log" Jan 24 08:51:15 crc kubenswrapper[4705]: I0124 08:51:15.518849 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-srn4g_7bd1c960-7a5d-48dd-bef7-2fd94acef48d/extract-utilities/0.log" Jan 24 08:51:15 crc kubenswrapper[4705]: I0124 08:51:15.534317 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-srn4g_7bd1c960-7a5d-48dd-bef7-2fd94acef48d/extract-content/0.log" Jan 24 08:51:15 crc kubenswrapper[4705]: I0124 08:51:15.573181 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-srn4g_7bd1c960-7a5d-48dd-bef7-2fd94acef48d/extract-content/0.log" Jan 24 08:51:15 crc kubenswrapper[4705]: I0124 08:51:15.660564 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rmh7t_2fbcc45f-578c-43e4-9351-4b30d72d28f9/registry-server/0.log" Jan 24 08:51:15 crc kubenswrapper[4705]: I0124 08:51:15.842420 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-srn4g_7bd1c960-7a5d-48dd-bef7-2fd94acef48d/extract-utilities/0.log" Jan 24 08:51:15 crc kubenswrapper[4705]: I0124 08:51:15.884606 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-srn4g_7bd1c960-7a5d-48dd-bef7-2fd94acef48d/extract-content/0.log" Jan 24 08:51:16 crc kubenswrapper[4705]: I0124 08:51:16.059627 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-httl9_568a6099-4783-45d9-9ea8-7c856a3ddd86/marketplace-operator/0.log" Jan 24 08:51:16 crc kubenswrapper[4705]: I0124 08:51:16.102879 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4ffm5_d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea/extract-utilities/0.log" Jan 24 08:51:16 crc kubenswrapper[4705]: I0124 08:51:16.132660 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-srn4g_7bd1c960-7a5d-48dd-bef7-2fd94acef48d/registry-server/0.log" Jan 24 08:51:16 crc kubenswrapper[4705]: I0124 08:51:16.317167 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4ffm5_d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea/extract-content/0.log" Jan 24 08:51:16 crc kubenswrapper[4705]: I0124 08:51:16.317707 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4ffm5_d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea/extract-content/0.log" Jan 24 08:51:16 crc kubenswrapper[4705]: I0124 08:51:16.318291 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4ffm5_d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea/extract-utilities/0.log" Jan 24 08:51:16 crc kubenswrapper[4705]: I0124 08:51:16.485259 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4ffm5_d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea/extract-utilities/0.log" Jan 24 08:51:16 crc kubenswrapper[4705]: I0124 08:51:16.508834 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-4ffm5_d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea/extract-content/0.log" Jan 24 08:51:16 crc kubenswrapper[4705]: I0124 08:51:16.557874 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4rlgc_2681beb5-9536-4c9d-9221-18da1c8e244b/extract-utilities/0.log" Jan 24 08:51:16 crc kubenswrapper[4705]: I0124 08:51:16.593693 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4ffm5_d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea/registry-server/0.log" Jan 24 08:51:16 crc kubenswrapper[4705]: I0124 08:51:16.729009 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4rlgc_2681beb5-9536-4c9d-9221-18da1c8e244b/extract-utilities/0.log" Jan 24 08:51:16 crc kubenswrapper[4705]: I0124 08:51:16.745969 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4rlgc_2681beb5-9536-4c9d-9221-18da1c8e244b/extract-content/0.log" Jan 24 08:51:16 crc kubenswrapper[4705]: I0124 08:51:16.768901 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4rlgc_2681beb5-9536-4c9d-9221-18da1c8e244b/extract-content/0.log" Jan 24 08:51:16 crc kubenswrapper[4705]: I0124 08:51:16.943597 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4rlgc_2681beb5-9536-4c9d-9221-18da1c8e244b/extract-utilities/0.log" Jan 24 08:51:16 crc kubenswrapper[4705]: I0124 08:51:16.963786 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4rlgc_2681beb5-9536-4c9d-9221-18da1c8e244b/extract-content/0.log" Jan 24 08:51:17 crc kubenswrapper[4705]: I0124 08:51:17.412543 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4rlgc_2681beb5-9536-4c9d-9221-18da1c8e244b/registry-server/0.log" Jan 24 
08:51:33 crc kubenswrapper[4705]: I0124 08:51:33.230950 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-q6rz4_205fc2d4-b488-4221-b24e-c97e1447deb9/prometheus-operator/0.log" Jan 24 08:51:33 crc kubenswrapper[4705]: I0124 08:51:33.237058 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf_b32cfae6-0b9f-4565-b802-c667cc6def0a/prometheus-operator-admission-webhook/0.log" Jan 24 08:51:33 crc kubenswrapper[4705]: I0124 08:51:33.297784 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v_3747c1cc-2cec-4baf-b6f2-14109753b841/prometheus-operator-admission-webhook/0.log" Jan 24 08:51:33 crc kubenswrapper[4705]: I0124 08:51:33.458785 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-hjjgh_348d157c-9094-4f31-aadf-44f7a46f561b/operator/0.log" Jan 24 08:51:33 crc kubenswrapper[4705]: I0124 08:51:33.475459 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-l6h9z_3da07060-d23f-4ecc-9a3c-9d659a0ab121/perses-operator/0.log" Jan 24 08:51:43 crc kubenswrapper[4705]: E0124 08:51:43.435992 4705 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.15:36378->38.129.56.15:37061: write tcp 38.129.56.15:36378->38.129.56.15:37061: write: broken pipe Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.274216 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6l6b5"] Jan 24 08:53:03 crc kubenswrapper[4705]: E0124 08:53:03.275058 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5604410-962a-4812-80e9-c8aaf3726ea9" containerName="extract-utilities" Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.275073 4705 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c5604410-962a-4812-80e9-c8aaf3726ea9" containerName="extract-utilities" Jan 24 08:53:03 crc kubenswrapper[4705]: E0124 08:53:03.275092 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5604410-962a-4812-80e9-c8aaf3726ea9" containerName="extract-content" Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.275098 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5604410-962a-4812-80e9-c8aaf3726ea9" containerName="extract-content" Jan 24 08:53:03 crc kubenswrapper[4705]: E0124 08:53:03.275121 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5604410-962a-4812-80e9-c8aaf3726ea9" containerName="registry-server" Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.275127 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5604410-962a-4812-80e9-c8aaf3726ea9" containerName="registry-server" Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.275364 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5604410-962a-4812-80e9-c8aaf3726ea9" containerName="registry-server" Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.277321 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.301623 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6l6b5"] Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.408844 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/219334aa-73c9-49a0-990c-9b9554313fb9-catalog-content\") pod \"certified-operators-6l6b5\" (UID: \"219334aa-73c9-49a0-990c-9b9554313fb9\") " pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.409386 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/219334aa-73c9-49a0-990c-9b9554313fb9-utilities\") pod \"certified-operators-6l6b5\" (UID: \"219334aa-73c9-49a0-990c-9b9554313fb9\") " pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.409655 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w5lm\" (UniqueName: \"kubernetes.io/projected/219334aa-73c9-49a0-990c-9b9554313fb9-kube-api-access-5w5lm\") pod \"certified-operators-6l6b5\" (UID: \"219334aa-73c9-49a0-990c-9b9554313fb9\") " pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.512175 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/219334aa-73c9-49a0-990c-9b9554313fb9-utilities\") pod \"certified-operators-6l6b5\" (UID: \"219334aa-73c9-49a0-990c-9b9554313fb9\") " pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.512350 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5w5lm\" (UniqueName: \"kubernetes.io/projected/219334aa-73c9-49a0-990c-9b9554313fb9-kube-api-access-5w5lm\") pod \"certified-operators-6l6b5\" (UID: \"219334aa-73c9-49a0-990c-9b9554313fb9\") " pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.512451 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/219334aa-73c9-49a0-990c-9b9554313fb9-catalog-content\") pod \"certified-operators-6l6b5\" (UID: \"219334aa-73c9-49a0-990c-9b9554313fb9\") " pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.512866 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/219334aa-73c9-49a0-990c-9b9554313fb9-utilities\") pod \"certified-operators-6l6b5\" (UID: \"219334aa-73c9-49a0-990c-9b9554313fb9\") " pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.513246 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/219334aa-73c9-49a0-990c-9b9554313fb9-catalog-content\") pod \"certified-operators-6l6b5\" (UID: \"219334aa-73c9-49a0-990c-9b9554313fb9\") " pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.539044 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w5lm\" (UniqueName: \"kubernetes.io/projected/219334aa-73c9-49a0-990c-9b9554313fb9-kube-api-access-5w5lm\") pod \"certified-operators-6l6b5\" (UID: \"219334aa-73c9-49a0-990c-9b9554313fb9\") " pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:03 crc kubenswrapper[4705]: I0124 08:53:03.599929 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:04 crc kubenswrapper[4705]: I0124 08:53:04.306083 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6l6b5"] Jan 24 08:53:04 crc kubenswrapper[4705]: I0124 08:53:04.964671 4705 generic.go:334] "Generic (PLEG): container finished" podID="219334aa-73c9-49a0-990c-9b9554313fb9" containerID="31d3dcd6e47f1946390ea2141770db17ba4df0a6f697cc58b8603feda03a0b09" exitCode=0 Jan 24 08:53:04 crc kubenswrapper[4705]: I0124 08:53:04.964744 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6l6b5" event={"ID":"219334aa-73c9-49a0-990c-9b9554313fb9","Type":"ContainerDied","Data":"31d3dcd6e47f1946390ea2141770db17ba4df0a6f697cc58b8603feda03a0b09"} Jan 24 08:53:04 crc kubenswrapper[4705]: I0124 08:53:04.965230 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6l6b5" event={"ID":"219334aa-73c9-49a0-990c-9b9554313fb9","Type":"ContainerStarted","Data":"4139d0fb82ae2ccb1a83828d91e3bab09d627fe9cc7c2fc1fb2533a6044e9792"} Jan 24 08:53:04 crc kubenswrapper[4705]: I0124 08:53:04.976625 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 08:53:05 crc kubenswrapper[4705]: I0124 08:53:05.986776 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6l6b5" event={"ID":"219334aa-73c9-49a0-990c-9b9554313fb9","Type":"ContainerStarted","Data":"83d4961548abe7bd6c3afea92f9f7158552343f3bf6c574d7ae156428cb7d356"} Jan 24 08:53:06 crc kubenswrapper[4705]: I0124 08:53:06.999034 4705 generic.go:334] "Generic (PLEG): container finished" podID="219334aa-73c9-49a0-990c-9b9554313fb9" containerID="83d4961548abe7bd6c3afea92f9f7158552343f3bf6c574d7ae156428cb7d356" exitCode=0 Jan 24 08:53:06 crc kubenswrapper[4705]: I0124 08:53:06.999081 4705 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-6l6b5" event={"ID":"219334aa-73c9-49a0-990c-9b9554313fb9","Type":"ContainerDied","Data":"83d4961548abe7bd6c3afea92f9f7158552343f3bf6c574d7ae156428cb7d356"} Jan 24 08:53:07 crc kubenswrapper[4705]: I0124 08:53:07.071687 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:53:07 crc kubenswrapper[4705]: I0124 08:53:07.071737 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:53:08 crc kubenswrapper[4705]: I0124 08:53:08.009873 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6l6b5" event={"ID":"219334aa-73c9-49a0-990c-9b9554313fb9","Type":"ContainerStarted","Data":"e4c4573c563a5edb9a07685cc4b891c802a806cc4b44d487502498f1e57ab7d3"} Jan 24 08:53:13 crc kubenswrapper[4705]: I0124 08:53:13.604989 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:13 crc kubenswrapper[4705]: I0124 08:53:13.605895 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:13 crc kubenswrapper[4705]: I0124 08:53:13.717539 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:13 crc kubenswrapper[4705]: I0124 08:53:13.784443 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-6l6b5" podStartSLOduration=8.355890252 podStartE2EDuration="10.784420759s" podCreationTimestamp="2026-01-24 08:53:03 +0000 UTC" firstStartedPulling="2026-01-24 08:53:04.976226518 +0000 UTC m=+4323.696099846" lastFinishedPulling="2026-01-24 08:53:07.404757065 +0000 UTC m=+4326.124630353" observedRunningTime="2026-01-24 08:53:08.035542711 +0000 UTC m=+4326.755415999" watchObservedRunningTime="2026-01-24 08:53:13.784420759 +0000 UTC m=+4332.504294057" Jan 24 08:53:14 crc kubenswrapper[4705]: I0124 08:53:14.125535 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:14 crc kubenswrapper[4705]: I0124 08:53:14.190187 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6l6b5"] Jan 24 08:53:16 crc kubenswrapper[4705]: I0124 08:53:16.248222 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6l6b5" podUID="219334aa-73c9-49a0-990c-9b9554313fb9" containerName="registry-server" containerID="cri-o://e4c4573c563a5edb9a07685cc4b891c802a806cc4b44d487502498f1e57ab7d3" gracePeriod=2 Jan 24 08:53:17 crc kubenswrapper[4705]: I0124 08:53:17.261068 4705 generic.go:334] "Generic (PLEG): container finished" podID="f7655f65-4aae-4328-8800-400aca52aa98" containerID="8f08d061a0573575125a37cd145e84f551f6daee8b603dbe401633c5d64b1465" exitCode=0 Jan 24 08:53:17 crc kubenswrapper[4705]: I0124 08:53:17.261131 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bsz5/must-gather-r2vbl" event={"ID":"f7655f65-4aae-4328-8800-400aca52aa98","Type":"ContainerDied","Data":"8f08d061a0573575125a37cd145e84f551f6daee8b603dbe401633c5d64b1465"} Jan 24 08:53:17 crc kubenswrapper[4705]: I0124 08:53:17.262091 4705 scope.go:117] "RemoveContainer" containerID="8f08d061a0573575125a37cd145e84f551f6daee8b603dbe401633c5d64b1465" Jan 24 
08:53:17 crc kubenswrapper[4705]: I0124 08:53:17.266788 4705 generic.go:334] "Generic (PLEG): container finished" podID="219334aa-73c9-49a0-990c-9b9554313fb9" containerID="e4c4573c563a5edb9a07685cc4b891c802a806cc4b44d487502498f1e57ab7d3" exitCode=0 Jan 24 08:53:17 crc kubenswrapper[4705]: I0124 08:53:17.266846 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6l6b5" event={"ID":"219334aa-73c9-49a0-990c-9b9554313fb9","Type":"ContainerDied","Data":"e4c4573c563a5edb9a07685cc4b891c802a806cc4b44d487502498f1e57ab7d3"} Jan 24 08:53:17 crc kubenswrapper[4705]: I0124 08:53:17.863890 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:17 crc kubenswrapper[4705]: I0124 08:53:17.913407 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/219334aa-73c9-49a0-990c-9b9554313fb9-utilities\") pod \"219334aa-73c9-49a0-990c-9b9554313fb9\" (UID: \"219334aa-73c9-49a0-990c-9b9554313fb9\") " Jan 24 08:53:17 crc kubenswrapper[4705]: I0124 08:53:17.913470 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w5lm\" (UniqueName: \"kubernetes.io/projected/219334aa-73c9-49a0-990c-9b9554313fb9-kube-api-access-5w5lm\") pod \"219334aa-73c9-49a0-990c-9b9554313fb9\" (UID: \"219334aa-73c9-49a0-990c-9b9554313fb9\") " Jan 24 08:53:17 crc kubenswrapper[4705]: I0124 08:53:17.913559 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/219334aa-73c9-49a0-990c-9b9554313fb9-catalog-content\") pod \"219334aa-73c9-49a0-990c-9b9554313fb9\" (UID: \"219334aa-73c9-49a0-990c-9b9554313fb9\") " Jan 24 08:53:17 crc kubenswrapper[4705]: I0124 08:53:17.914661 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/219334aa-73c9-49a0-990c-9b9554313fb9-utilities" (OuterVolumeSpecName: "utilities") pod "219334aa-73c9-49a0-990c-9b9554313fb9" (UID: "219334aa-73c9-49a0-990c-9b9554313fb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:53:17 crc kubenswrapper[4705]: I0124 08:53:17.923269 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/219334aa-73c9-49a0-990c-9b9554313fb9-kube-api-access-5w5lm" (OuterVolumeSpecName: "kube-api-access-5w5lm") pod "219334aa-73c9-49a0-990c-9b9554313fb9" (UID: "219334aa-73c9-49a0-990c-9b9554313fb9"). InnerVolumeSpecName "kube-api-access-5w5lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:53:17 crc kubenswrapper[4705]: I0124 08:53:17.996862 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/219334aa-73c9-49a0-990c-9b9554313fb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "219334aa-73c9-49a0-990c-9b9554313fb9" (UID: "219334aa-73c9-49a0-990c-9b9554313fb9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:53:18 crc kubenswrapper[4705]: I0124 08:53:18.014908 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/219334aa-73c9-49a0-990c-9b9554313fb9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:53:18 crc kubenswrapper[4705]: I0124 08:53:18.014945 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/219334aa-73c9-49a0-990c-9b9554313fb9-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:53:18 crc kubenswrapper[4705]: I0124 08:53:18.014960 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w5lm\" (UniqueName: \"kubernetes.io/projected/219334aa-73c9-49a0-990c-9b9554313fb9-kube-api-access-5w5lm\") on node \"crc\" DevicePath \"\"" Jan 24 08:53:18 crc kubenswrapper[4705]: I0124 08:53:18.016403 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6bsz5_must-gather-r2vbl_f7655f65-4aae-4328-8800-400aca52aa98/gather/0.log" Jan 24 08:53:18 crc kubenswrapper[4705]: I0124 08:53:18.288689 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6l6b5" event={"ID":"219334aa-73c9-49a0-990c-9b9554313fb9","Type":"ContainerDied","Data":"4139d0fb82ae2ccb1a83828d91e3bab09d627fe9cc7c2fc1fb2533a6044e9792"} Jan 24 08:53:18 crc kubenswrapper[4705]: I0124 08:53:18.288738 4705 scope.go:117] "RemoveContainer" containerID="e4c4573c563a5edb9a07685cc4b891c802a806cc4b44d487502498f1e57ab7d3" Jan 24 08:53:18 crc kubenswrapper[4705]: I0124 08:53:18.288796 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6l6b5" Jan 24 08:53:18 crc kubenswrapper[4705]: I0124 08:53:18.318191 4705 scope.go:117] "RemoveContainer" containerID="83d4961548abe7bd6c3afea92f9f7158552343f3bf6c574d7ae156428cb7d356" Jan 24 08:53:18 crc kubenswrapper[4705]: I0124 08:53:18.345881 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6l6b5"] Jan 24 08:53:18 crc kubenswrapper[4705]: I0124 08:53:18.359253 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6l6b5"] Jan 24 08:53:18 crc kubenswrapper[4705]: I0124 08:53:18.361167 4705 scope.go:117] "RemoveContainer" containerID="31d3dcd6e47f1946390ea2141770db17ba4df0a6f697cc58b8603feda03a0b09" Jan 24 08:53:19 crc kubenswrapper[4705]: I0124 08:53:19.588999 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="219334aa-73c9-49a0-990c-9b9554313fb9" path="/var/lib/kubelet/pods/219334aa-73c9-49a0-990c-9b9554313fb9/volumes" Jan 24 08:53:26 crc kubenswrapper[4705]: I0124 08:53:26.593751 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6bsz5/must-gather-r2vbl"] Jan 24 08:53:26 crc kubenswrapper[4705]: I0124 08:53:26.594563 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6bsz5/must-gather-r2vbl" podUID="f7655f65-4aae-4328-8800-400aca52aa98" containerName="copy" containerID="cri-o://92f78df6c5da65c35bc436103b3cd82b39221916863130fdcc5c78770ce5c9ac" gracePeriod=2 Jan 24 08:53:26 crc kubenswrapper[4705]: I0124 08:53:26.607232 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6bsz5/must-gather-r2vbl"] Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.163089 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6bsz5_must-gather-r2vbl_f7655f65-4aae-4328-8800-400aca52aa98/copy/0.log" Jan 24 08:53:27 crc kubenswrapper[4705]: 
I0124 08:53:27.164317 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bsz5/must-gather-r2vbl" Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.272611 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7655f65-4aae-4328-8800-400aca52aa98-must-gather-output\") pod \"f7655f65-4aae-4328-8800-400aca52aa98\" (UID: \"f7655f65-4aae-4328-8800-400aca52aa98\") " Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.272979 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md64l\" (UniqueName: \"kubernetes.io/projected/f7655f65-4aae-4328-8800-400aca52aa98-kube-api-access-md64l\") pod \"f7655f65-4aae-4328-8800-400aca52aa98\" (UID: \"f7655f65-4aae-4328-8800-400aca52aa98\") " Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.281520 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7655f65-4aae-4328-8800-400aca52aa98-kube-api-access-md64l" (OuterVolumeSpecName: "kube-api-access-md64l") pod "f7655f65-4aae-4328-8800-400aca52aa98" (UID: "f7655f65-4aae-4328-8800-400aca52aa98"). InnerVolumeSpecName "kube-api-access-md64l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.375317 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md64l\" (UniqueName: \"kubernetes.io/projected/f7655f65-4aae-4328-8800-400aca52aa98-kube-api-access-md64l\") on node \"crc\" DevicePath \"\"" Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.390151 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6bsz5_must-gather-r2vbl_f7655f65-4aae-4328-8800-400aca52aa98/copy/0.log" Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.390834 4705 generic.go:334] "Generic (PLEG): container finished" podID="f7655f65-4aae-4328-8800-400aca52aa98" containerID="92f78df6c5da65c35bc436103b3cd82b39221916863130fdcc5c78770ce5c9ac" exitCode=143 Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.390905 4705 scope.go:117] "RemoveContainer" containerID="92f78df6c5da65c35bc436103b3cd82b39221916863130fdcc5c78770ce5c9ac" Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.390932 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bsz5/must-gather-r2vbl" Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.416994 4705 scope.go:117] "RemoveContainer" containerID="8f08d061a0573575125a37cd145e84f551f6daee8b603dbe401633c5d64b1465" Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.425511 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7655f65-4aae-4328-8800-400aca52aa98-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f7655f65-4aae-4328-8800-400aca52aa98" (UID: "f7655f65-4aae-4328-8800-400aca52aa98"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.480549 4705 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7655f65-4aae-4328-8800-400aca52aa98-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.515318 4705 scope.go:117] "RemoveContainer" containerID="92f78df6c5da65c35bc436103b3cd82b39221916863130fdcc5c78770ce5c9ac" Jan 24 08:53:27 crc kubenswrapper[4705]: E0124 08:53:27.515924 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92f78df6c5da65c35bc436103b3cd82b39221916863130fdcc5c78770ce5c9ac\": container with ID starting with 92f78df6c5da65c35bc436103b3cd82b39221916863130fdcc5c78770ce5c9ac not found: ID does not exist" containerID="92f78df6c5da65c35bc436103b3cd82b39221916863130fdcc5c78770ce5c9ac" Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.516009 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92f78df6c5da65c35bc436103b3cd82b39221916863130fdcc5c78770ce5c9ac"} err="failed to get container status \"92f78df6c5da65c35bc436103b3cd82b39221916863130fdcc5c78770ce5c9ac\": rpc error: code = NotFound desc = could not find container \"92f78df6c5da65c35bc436103b3cd82b39221916863130fdcc5c78770ce5c9ac\": container with ID starting with 92f78df6c5da65c35bc436103b3cd82b39221916863130fdcc5c78770ce5c9ac not found: ID does not exist" Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.516049 4705 scope.go:117] "RemoveContainer" containerID="8f08d061a0573575125a37cd145e84f551f6daee8b603dbe401633c5d64b1465" Jan 24 08:53:27 crc kubenswrapper[4705]: E0124 08:53:27.516414 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8f08d061a0573575125a37cd145e84f551f6daee8b603dbe401633c5d64b1465\": container with ID starting with 8f08d061a0573575125a37cd145e84f551f6daee8b603dbe401633c5d64b1465 not found: ID does not exist" containerID="8f08d061a0573575125a37cd145e84f551f6daee8b603dbe401633c5d64b1465" Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.516451 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f08d061a0573575125a37cd145e84f551f6daee8b603dbe401633c5d64b1465"} err="failed to get container status \"8f08d061a0573575125a37cd145e84f551f6daee8b603dbe401633c5d64b1465\": rpc error: code = NotFound desc = could not find container \"8f08d061a0573575125a37cd145e84f551f6daee8b603dbe401633c5d64b1465\": container with ID starting with 8f08d061a0573575125a37cd145e84f551f6daee8b603dbe401633c5d64b1465 not found: ID does not exist" Jan 24 08:53:27 crc kubenswrapper[4705]: I0124 08:53:27.626120 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7655f65-4aae-4328-8800-400aca52aa98" path="/var/lib/kubelet/pods/f7655f65-4aae-4328-8800-400aca52aa98/volumes" Jan 24 08:53:37 crc kubenswrapper[4705]: I0124 08:53:37.071561 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:53:37 crc kubenswrapper[4705]: I0124 08:53:37.072336 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:54:07 crc kubenswrapper[4705]: I0124 08:54:07.071451 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:54:07 crc kubenswrapper[4705]: I0124 08:54:07.072144 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:54:07 crc kubenswrapper[4705]: I0124 08:54:07.072251 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 08:54:07 crc kubenswrapper[4705]: I0124 08:54:07.073952 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 08:54:07 crc kubenswrapper[4705]: I0124 08:54:07.074076 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" gracePeriod=600 Jan 24 08:54:07 crc kubenswrapper[4705]: E0124 08:54:07.204019 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:54:07 crc kubenswrapper[4705]: I0124 08:54:07.954710 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" exitCode=0 Jan 24 08:54:07 crc kubenswrapper[4705]: I0124 08:54:07.954761 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c"} Jan 24 08:54:07 crc kubenswrapper[4705]: I0124 08:54:07.954833 4705 scope.go:117] "RemoveContainer" containerID="e3c6ce56da70674f16b0913201212d84d13738616df582214bce280d71f18159" Jan 24 08:54:07 crc kubenswrapper[4705]: I0124 08:54:07.955606 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:54:07 crc kubenswrapper[4705]: E0124 08:54:07.955991 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:54:08 crc kubenswrapper[4705]: I0124 08:54:08.884801 4705 scope.go:117] "RemoveContainer" containerID="75265a1153d819ac1e7bddfdce313fd342664579bc7d0d5ec40dd6588a85457d" Jan 24 08:54:22 crc kubenswrapper[4705]: I0124 08:54:22.576960 4705 scope.go:117] "RemoveContainer" 
containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:54:22 crc kubenswrapper[4705]: E0124 08:54:22.577805 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:54:36 crc kubenswrapper[4705]: I0124 08:54:36.576169 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:54:36 crc kubenswrapper[4705]: E0124 08:54:36.577340 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:54:50 crc kubenswrapper[4705]: I0124 08:54:50.575635 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:54:50 crc kubenswrapper[4705]: E0124 08:54:50.576325 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.686359 4705 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s72kv"] Jan 24 08:54:53 crc kubenswrapper[4705]: E0124 08:54:53.687514 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219334aa-73c9-49a0-990c-9b9554313fb9" containerName="extract-content" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.687535 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="219334aa-73c9-49a0-990c-9b9554313fb9" containerName="extract-content" Jan 24 08:54:53 crc kubenswrapper[4705]: E0124 08:54:53.687559 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219334aa-73c9-49a0-990c-9b9554313fb9" containerName="registry-server" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.687568 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="219334aa-73c9-49a0-990c-9b9554313fb9" containerName="registry-server" Jan 24 08:54:53 crc kubenswrapper[4705]: E0124 08:54:53.687585 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7655f65-4aae-4328-8800-400aca52aa98" containerName="copy" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.687595 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7655f65-4aae-4328-8800-400aca52aa98" containerName="copy" Jan 24 08:54:53 crc kubenswrapper[4705]: E0124 08:54:53.687631 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7655f65-4aae-4328-8800-400aca52aa98" containerName="gather" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.687640 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7655f65-4aae-4328-8800-400aca52aa98" containerName="gather" Jan 24 08:54:53 crc kubenswrapper[4705]: E0124 08:54:53.687654 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219334aa-73c9-49a0-990c-9b9554313fb9" containerName="extract-utilities" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.687663 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="219334aa-73c9-49a0-990c-9b9554313fb9" containerName="extract-utilities" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.687984 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7655f65-4aae-4328-8800-400aca52aa98" containerName="copy" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.688014 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="219334aa-73c9-49a0-990c-9b9554313fb9" containerName="registry-server" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.688042 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7655f65-4aae-4328-8800-400aca52aa98" containerName="gather" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.691148 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.702347 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s72kv"] Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.736461 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-catalog-content\") pod \"redhat-marketplace-s72kv\" (UID: \"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6\") " pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.736601 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-utilities\") pod \"redhat-marketplace-s72kv\" (UID: \"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6\") " pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.736750 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-4hm62\" (UniqueName: \"kubernetes.io/projected/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-kube-api-access-4hm62\") pod \"redhat-marketplace-s72kv\" (UID: \"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6\") " pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.838840 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-catalog-content\") pod \"redhat-marketplace-s72kv\" (UID: \"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6\") " pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.838972 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-utilities\") pod \"redhat-marketplace-s72kv\" (UID: \"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6\") " pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.839090 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hm62\" (UniqueName: \"kubernetes.io/projected/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-kube-api-access-4hm62\") pod \"redhat-marketplace-s72kv\" (UID: \"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6\") " pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.839450 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-catalog-content\") pod \"redhat-marketplace-s72kv\" (UID: \"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6\") " pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.839525 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-utilities\") pod \"redhat-marketplace-s72kv\" (UID: \"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6\") " pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:54:53 crc kubenswrapper[4705]: I0124 08:54:53.862043 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hm62\" (UniqueName: \"kubernetes.io/projected/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-kube-api-access-4hm62\") pod \"redhat-marketplace-s72kv\" (UID: \"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6\") " pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:54:54 crc kubenswrapper[4705]: I0124 08:54:54.031422 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:54:54 crc kubenswrapper[4705]: I0124 08:54:54.636242 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s72kv"] Jan 24 08:54:55 crc kubenswrapper[4705]: I0124 08:54:54.998385 4705 generic.go:334] "Generic (PLEG): container finished" podID="5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6" containerID="5a7505c54c9aaa8b1dfc33a360c8b795c2d2afa1f56d318af6a95d665491be57" exitCode=0 Jan 24 08:54:55 crc kubenswrapper[4705]: I0124 08:54:54.998438 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s72kv" event={"ID":"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6","Type":"ContainerDied","Data":"5a7505c54c9aaa8b1dfc33a360c8b795c2d2afa1f56d318af6a95d665491be57"} Jan 24 08:54:55 crc kubenswrapper[4705]: I0124 08:54:54.998470 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s72kv" event={"ID":"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6","Type":"ContainerStarted","Data":"f7044ee9614165fee6af7a9cb8febf0d399f522cb0ac7e841b4fd42e7a0dd010"} Jan 24 08:54:56 crc kubenswrapper[4705]: I0124 08:54:56.013086 4705 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s72kv" event={"ID":"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6","Type":"ContainerStarted","Data":"84787afd270660a0f757190ba30cc961b03200d159b63f187799c46c334c2f41"} Jan 24 08:54:57 crc kubenswrapper[4705]: I0124 08:54:57.027917 4705 generic.go:334] "Generic (PLEG): container finished" podID="5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6" containerID="84787afd270660a0f757190ba30cc961b03200d159b63f187799c46c334c2f41" exitCode=0 Jan 24 08:54:57 crc kubenswrapper[4705]: I0124 08:54:57.028007 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s72kv" event={"ID":"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6","Type":"ContainerDied","Data":"84787afd270660a0f757190ba30cc961b03200d159b63f187799c46c334c2f41"} Jan 24 08:54:58 crc kubenswrapper[4705]: I0124 08:54:58.040453 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s72kv" event={"ID":"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6","Type":"ContainerStarted","Data":"daf29eafc967e408c552517a4156f622ec36e6e383f5c7c06fe0b4900923e99a"} Jan 24 08:54:58 crc kubenswrapper[4705]: I0124 08:54:58.069089 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s72kv" podStartSLOduration=2.531952433 podStartE2EDuration="5.06906507s" podCreationTimestamp="2026-01-24 08:54:53 +0000 UTC" firstStartedPulling="2026-01-24 08:54:55.010958469 +0000 UTC m=+4433.730831757" lastFinishedPulling="2026-01-24 08:54:57.548071106 +0000 UTC m=+4436.267944394" observedRunningTime="2026-01-24 08:54:58.067268529 +0000 UTC m=+4436.787141837" watchObservedRunningTime="2026-01-24 08:54:58.06906507 +0000 UTC m=+4436.788938358" Jan 24 08:55:04 crc kubenswrapper[4705]: I0124 08:55:04.032579 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:55:04 crc kubenswrapper[4705]: I0124 
08:55:04.033465 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:55:04 crc kubenswrapper[4705]: I0124 08:55:04.158612 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:55:04 crc kubenswrapper[4705]: I0124 08:55:04.240886 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:55:04 crc kubenswrapper[4705]: I0124 08:55:04.394960 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s72kv"] Jan 24 08:55:04 crc kubenswrapper[4705]: I0124 08:55:04.576019 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:55:04 crc kubenswrapper[4705]: E0124 08:55:04.576321 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:55:06 crc kubenswrapper[4705]: I0124 08:55:06.281797 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s72kv" podUID="5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6" containerName="registry-server" containerID="cri-o://daf29eafc967e408c552517a4156f622ec36e6e383f5c7c06fe0b4900923e99a" gracePeriod=2 Jan 24 08:55:07 crc kubenswrapper[4705]: I0124 08:55:07.295138 4705 generic.go:334] "Generic (PLEG): container finished" podID="5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6" containerID="daf29eafc967e408c552517a4156f622ec36e6e383f5c7c06fe0b4900923e99a" exitCode=0 Jan 
24 08:55:07 crc kubenswrapper[4705]: I0124 08:55:07.295203 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s72kv" event={"ID":"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6","Type":"ContainerDied","Data":"daf29eafc967e408c552517a4156f622ec36e6e383f5c7c06fe0b4900923e99a"} Jan 24 08:55:07 crc kubenswrapper[4705]: I0124 08:55:07.402389 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:55:07 crc kubenswrapper[4705]: I0124 08:55:07.590731 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-catalog-content\") pod \"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6\" (UID: \"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6\") " Jan 24 08:55:07 crc kubenswrapper[4705]: I0124 08:55:07.591072 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-utilities\") pod \"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6\" (UID: \"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6\") " Jan 24 08:55:07 crc kubenswrapper[4705]: I0124 08:55:07.591160 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hm62\" (UniqueName: \"kubernetes.io/projected/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-kube-api-access-4hm62\") pod \"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6\" (UID: \"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6\") " Jan 24 08:55:07 crc kubenswrapper[4705]: I0124 08:55:07.591790 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-utilities" (OuterVolumeSpecName: "utilities") pod "5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6" (UID: "5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:55:07 crc kubenswrapper[4705]: I0124 08:55:07.596334 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-kube-api-access-4hm62" (OuterVolumeSpecName: "kube-api-access-4hm62") pod "5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6" (UID: "5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6"). InnerVolumeSpecName "kube-api-access-4hm62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:55:07 crc kubenswrapper[4705]: I0124 08:55:07.614066 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6" (UID: "5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:55:07 crc kubenswrapper[4705]: I0124 08:55:07.694083 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:55:07 crc kubenswrapper[4705]: I0124 08:55:07.694125 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hm62\" (UniqueName: \"kubernetes.io/projected/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-kube-api-access-4hm62\") on node \"crc\" DevicePath \"\"" Jan 24 08:55:07 crc kubenswrapper[4705]: I0124 08:55:07.694140 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:55:08 crc kubenswrapper[4705]: I0124 08:55:08.375525 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s72kv" 
event={"ID":"5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6","Type":"ContainerDied","Data":"f7044ee9614165fee6af7a9cb8febf0d399f522cb0ac7e841b4fd42e7a0dd010"} Jan 24 08:55:08 crc kubenswrapper[4705]: I0124 08:55:08.375806 4705 scope.go:117] "RemoveContainer" containerID="daf29eafc967e408c552517a4156f622ec36e6e383f5c7c06fe0b4900923e99a" Jan 24 08:55:08 crc kubenswrapper[4705]: I0124 08:55:08.375963 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s72kv" Jan 24 08:55:08 crc kubenswrapper[4705]: I0124 08:55:08.398039 4705 scope.go:117] "RemoveContainer" containerID="84787afd270660a0f757190ba30cc961b03200d159b63f187799c46c334c2f41" Jan 24 08:55:08 crc kubenswrapper[4705]: I0124 08:55:08.415573 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s72kv"] Jan 24 08:55:08 crc kubenswrapper[4705]: I0124 08:55:08.425001 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s72kv"] Jan 24 08:55:08 crc kubenswrapper[4705]: I0124 08:55:08.426238 4705 scope.go:117] "RemoveContainer" containerID="5a7505c54c9aaa8b1dfc33a360c8b795c2d2afa1f56d318af6a95d665491be57" Jan 24 08:55:08 crc kubenswrapper[4705]: E0124 08:55:08.575767 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cfa95b0_f651_4f7e_af37_4f97ea5c3fd6.slice/crio-f7044ee9614165fee6af7a9cb8febf0d399f522cb0ac7e841b4fd42e7a0dd010\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cfa95b0_f651_4f7e_af37_4f97ea5c3fd6.slice\": RecentStats: unable to find data in memory cache]" Jan 24 08:55:09 crc kubenswrapper[4705]: I0124 08:55:09.588670 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6" 
path="/var/lib/kubelet/pods/5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6/volumes" Jan 24 08:55:16 crc kubenswrapper[4705]: I0124 08:55:16.576114 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:55:16 crc kubenswrapper[4705]: E0124 08:55:16.576963 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:55:27 crc kubenswrapper[4705]: I0124 08:55:27.576184 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:55:27 crc kubenswrapper[4705]: E0124 08:55:27.576940 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:55:36 crc kubenswrapper[4705]: I0124 08:55:36.710427 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rbtvw"] Jan 24 08:55:36 crc kubenswrapper[4705]: E0124 08:55:36.712893 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6" containerName="extract-content" Jan 24 08:55:36 crc kubenswrapper[4705]: I0124 08:55:36.712986 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6" containerName="extract-content" Jan 24 08:55:36 
crc kubenswrapper[4705]: E0124 08:55:36.713053 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6" containerName="registry-server" Jan 24 08:55:36 crc kubenswrapper[4705]: I0124 08:55:36.713114 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6" containerName="registry-server" Jan 24 08:55:36 crc kubenswrapper[4705]: E0124 08:55:36.713184 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6" containerName="extract-utilities" Jan 24 08:55:36 crc kubenswrapper[4705]: I0124 08:55:36.713241 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6" containerName="extract-utilities" Jan 24 08:55:36 crc kubenswrapper[4705]: I0124 08:55:36.713667 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfa95b0-f651-4f7e-af37-4f97ea5c3fd6" containerName="registry-server" Jan 24 08:55:36 crc kubenswrapper[4705]: I0124 08:55:36.715299 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 08:55:36 crc kubenswrapper[4705]: I0124 08:55:36.733856 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rbtvw"] Jan 24 08:55:36 crc kubenswrapper[4705]: I0124 08:55:36.844119 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lktmr\" (UniqueName: \"kubernetes.io/projected/fc3a8567-314e-40d0-9c66-4a70464ca782-kube-api-access-lktmr\") pod \"redhat-operators-rbtvw\" (UID: \"fc3a8567-314e-40d0-9c66-4a70464ca782\") " pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 08:55:36 crc kubenswrapper[4705]: I0124 08:55:36.844287 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc3a8567-314e-40d0-9c66-4a70464ca782-catalog-content\") pod \"redhat-operators-rbtvw\" (UID: \"fc3a8567-314e-40d0-9c66-4a70464ca782\") " pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 08:55:36 crc kubenswrapper[4705]: I0124 08:55:36.844412 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc3a8567-314e-40d0-9c66-4a70464ca782-utilities\") pod \"redhat-operators-rbtvw\" (UID: \"fc3a8567-314e-40d0-9c66-4a70464ca782\") " pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 08:55:36 crc kubenswrapper[4705]: I0124 08:55:36.946132 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc3a8567-314e-40d0-9c66-4a70464ca782-utilities\") pod \"redhat-operators-rbtvw\" (UID: \"fc3a8567-314e-40d0-9c66-4a70464ca782\") " pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 08:55:36 crc kubenswrapper[4705]: I0124 08:55:36.946191 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lktmr\" (UniqueName: \"kubernetes.io/projected/fc3a8567-314e-40d0-9c66-4a70464ca782-kube-api-access-lktmr\") pod \"redhat-operators-rbtvw\" (UID: \"fc3a8567-314e-40d0-9c66-4a70464ca782\") " pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 08:55:36 crc kubenswrapper[4705]: I0124 08:55:36.946298 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc3a8567-314e-40d0-9c66-4a70464ca782-catalog-content\") pod \"redhat-operators-rbtvw\" (UID: \"fc3a8567-314e-40d0-9c66-4a70464ca782\") " pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 08:55:36 crc kubenswrapper[4705]: I0124 08:55:36.946615 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc3a8567-314e-40d0-9c66-4a70464ca782-utilities\") pod \"redhat-operators-rbtvw\" (UID: \"fc3a8567-314e-40d0-9c66-4a70464ca782\") " pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 08:55:36 crc kubenswrapper[4705]: I0124 08:55:36.946880 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc3a8567-314e-40d0-9c66-4a70464ca782-catalog-content\") pod \"redhat-operators-rbtvw\" (UID: \"fc3a8567-314e-40d0-9c66-4a70464ca782\") " pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 08:55:37 crc kubenswrapper[4705]: I0124 08:55:37.343650 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lktmr\" (UniqueName: \"kubernetes.io/projected/fc3a8567-314e-40d0-9c66-4a70464ca782-kube-api-access-lktmr\") pod \"redhat-operators-rbtvw\" (UID: \"fc3a8567-314e-40d0-9c66-4a70464ca782\") " pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 08:55:37 crc kubenswrapper[4705]: I0124 08:55:37.367466 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 08:55:37 crc kubenswrapper[4705]: I0124 08:55:37.850518 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rbtvw"] Jan 24 08:55:38 crc kubenswrapper[4705]: I0124 08:55:38.785181 4705 generic.go:334] "Generic (PLEG): container finished" podID="fc3a8567-314e-40d0-9c66-4a70464ca782" containerID="1530700d7c7e5ebf7d567f1e7012ebad054e727437cb3bd0b88bec3c44a31108" exitCode=0 Jan 24 08:55:38 crc kubenswrapper[4705]: I0124 08:55:38.785361 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rbtvw" event={"ID":"fc3a8567-314e-40d0-9c66-4a70464ca782","Type":"ContainerDied","Data":"1530700d7c7e5ebf7d567f1e7012ebad054e727437cb3bd0b88bec3c44a31108"} Jan 24 08:55:38 crc kubenswrapper[4705]: I0124 08:55:38.785510 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rbtvw" event={"ID":"fc3a8567-314e-40d0-9c66-4a70464ca782","Type":"ContainerStarted","Data":"b479235294d9658d4c0c77006b6e7da79424147995b43bec2fca37a2a7c7526e"} Jan 24 08:55:42 crc kubenswrapper[4705]: I0124 08:55:42.577385 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:55:42 crc kubenswrapper[4705]: E0124 08:55:42.577900 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:55:57 crc kubenswrapper[4705]: I0124 08:55:57.585300 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 
08:55:57 crc kubenswrapper[4705]: E0124 08:55:57.586401 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:56:00 crc kubenswrapper[4705]: I0124 08:56:00.117930 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rbtvw" event={"ID":"fc3a8567-314e-40d0-9c66-4a70464ca782","Type":"ContainerStarted","Data":"a8c643fd72c2909555011d5ef48b25f2666f8c418922cc70603a27cda9d61443"} Jan 24 08:56:02 crc kubenswrapper[4705]: I0124 08:56:02.138184 4705 generic.go:334] "Generic (PLEG): container finished" podID="fc3a8567-314e-40d0-9c66-4a70464ca782" containerID="a8c643fd72c2909555011d5ef48b25f2666f8c418922cc70603a27cda9d61443" exitCode=0 Jan 24 08:56:02 crc kubenswrapper[4705]: I0124 08:56:02.138545 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rbtvw" event={"ID":"fc3a8567-314e-40d0-9c66-4a70464ca782","Type":"ContainerDied","Data":"a8c643fd72c2909555011d5ef48b25f2666f8c418922cc70603a27cda9d61443"} Jan 24 08:56:03 crc kubenswrapper[4705]: I0124 08:56:03.153293 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rbtvw" event={"ID":"fc3a8567-314e-40d0-9c66-4a70464ca782","Type":"ContainerStarted","Data":"acdb889ed351941c6baf485c234077400f7bc71f7e13b7332aed1283041e0b52"} Jan 24 08:56:03 crc kubenswrapper[4705]: I0124 08:56:03.180242 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rbtvw" podStartSLOduration=3.383075255 podStartE2EDuration="27.180204077s" podCreationTimestamp="2026-01-24 08:55:36 
+0000 UTC" firstStartedPulling="2026-01-24 08:55:38.788035649 +0000 UTC m=+4477.507908947" lastFinishedPulling="2026-01-24 08:56:02.585164481 +0000 UTC m=+4501.305037769" observedRunningTime="2026-01-24 08:56:03.175446172 +0000 UTC m=+4501.895319480" watchObservedRunningTime="2026-01-24 08:56:03.180204077 +0000 UTC m=+4501.900077355" Jan 24 08:56:07 crc kubenswrapper[4705]: I0124 08:56:07.368674 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 08:56:07 crc kubenswrapper[4705]: I0124 08:56:07.369163 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 08:56:08 crc kubenswrapper[4705]: I0124 08:56:08.419039 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rbtvw" podUID="fc3a8567-314e-40d0-9c66-4a70464ca782" containerName="registry-server" probeResult="failure" output=< Jan 24 08:56:08 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Jan 24 08:56:08 crc kubenswrapper[4705]: > Jan 24 08:56:11 crc kubenswrapper[4705]: I0124 08:56:11.582682 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:56:11 crc kubenswrapper[4705]: E0124 08:56:11.583644 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:56:17 crc kubenswrapper[4705]: I0124 08:56:17.426255 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 
08:56:17 crc kubenswrapper[4705]: I0124 08:56:17.560361 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 08:56:17 crc kubenswrapper[4705]: I0124 08:56:17.673801 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rbtvw"] Jan 24 08:56:18 crc kubenswrapper[4705]: I0124 08:56:18.727999 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9rcfs/must-gather-nwqc4"] Jan 24 08:56:18 crc kubenswrapper[4705]: I0124 08:56:18.729883 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9rcfs/must-gather-nwqc4" Jan 24 08:56:18 crc kubenswrapper[4705]: I0124 08:56:18.732711 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9rcfs"/"kube-root-ca.crt" Jan 24 08:56:18 crc kubenswrapper[4705]: I0124 08:56:18.733068 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9rcfs"/"openshift-service-ca.crt" Jan 24 08:56:18 crc kubenswrapper[4705]: I0124 08:56:18.754244 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9rcfs/must-gather-nwqc4"] Jan 24 08:56:18 crc kubenswrapper[4705]: I0124 08:56:18.873311 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9c59f136-51de-4fe6-95c6-f00cf94c1e02-must-gather-output\") pod \"must-gather-nwqc4\" (UID: \"9c59f136-51de-4fe6-95c6-f00cf94c1e02\") " pod="openshift-must-gather-9rcfs/must-gather-nwqc4" Jan 24 08:56:18 crc kubenswrapper[4705]: I0124 08:56:18.873491 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6sd9\" (UniqueName: \"kubernetes.io/projected/9c59f136-51de-4fe6-95c6-f00cf94c1e02-kube-api-access-n6sd9\") pod \"must-gather-nwqc4\" (UID: 
\"9c59f136-51de-4fe6-95c6-f00cf94c1e02\") " pod="openshift-must-gather-9rcfs/must-gather-nwqc4" Jan 24 08:56:18 crc kubenswrapper[4705]: I0124 08:56:18.976052 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sd9\" (UniqueName: \"kubernetes.io/projected/9c59f136-51de-4fe6-95c6-f00cf94c1e02-kube-api-access-n6sd9\") pod \"must-gather-nwqc4\" (UID: \"9c59f136-51de-4fe6-95c6-f00cf94c1e02\") " pod="openshift-must-gather-9rcfs/must-gather-nwqc4" Jan 24 08:56:18 crc kubenswrapper[4705]: I0124 08:56:18.976197 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9c59f136-51de-4fe6-95c6-f00cf94c1e02-must-gather-output\") pod \"must-gather-nwqc4\" (UID: \"9c59f136-51de-4fe6-95c6-f00cf94c1e02\") " pod="openshift-must-gather-9rcfs/must-gather-nwqc4" Jan 24 08:56:18 crc kubenswrapper[4705]: I0124 08:56:18.976678 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9c59f136-51de-4fe6-95c6-f00cf94c1e02-must-gather-output\") pod \"must-gather-nwqc4\" (UID: \"9c59f136-51de-4fe6-95c6-f00cf94c1e02\") " pod="openshift-must-gather-9rcfs/must-gather-nwqc4" Jan 24 08:56:19 crc kubenswrapper[4705]: I0124 08:56:19.007002 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6sd9\" (UniqueName: \"kubernetes.io/projected/9c59f136-51de-4fe6-95c6-f00cf94c1e02-kube-api-access-n6sd9\") pod \"must-gather-nwqc4\" (UID: \"9c59f136-51de-4fe6-95c6-f00cf94c1e02\") " pod="openshift-must-gather-9rcfs/must-gather-nwqc4" Jan 24 08:56:19 crc kubenswrapper[4705]: I0124 08:56:19.051071 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9rcfs/must-gather-nwqc4" Jan 24 08:56:19 crc kubenswrapper[4705]: I0124 08:56:19.322995 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rbtvw" podUID="fc3a8567-314e-40d0-9c66-4a70464ca782" containerName="registry-server" containerID="cri-o://acdb889ed351941c6baf485c234077400f7bc71f7e13b7332aed1283041e0b52" gracePeriod=2 Jan 24 08:56:19 crc kubenswrapper[4705]: I0124 08:56:19.593450 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9rcfs/must-gather-nwqc4"] Jan 24 08:56:19 crc kubenswrapper[4705]: W0124 08:56:19.593811 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c59f136_51de_4fe6_95c6_f00cf94c1e02.slice/crio-7c8d455b02018ba10d1a0f84784bfee9502c873a3f6322ad875c778b499b3486 WatchSource:0}: Error finding container 7c8d455b02018ba10d1a0f84784bfee9502c873a3f6322ad875c778b499b3486: Status 404 returned error can't find the container with id 7c8d455b02018ba10d1a0f84784bfee9502c873a3f6322ad875c778b499b3486 Jan 24 08:56:19 crc kubenswrapper[4705]: I0124 08:56:19.746590 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 08:56:19 crc kubenswrapper[4705]: I0124 08:56:19.893619 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc3a8567-314e-40d0-9c66-4a70464ca782-catalog-content\") pod \"fc3a8567-314e-40d0-9c66-4a70464ca782\" (UID: \"fc3a8567-314e-40d0-9c66-4a70464ca782\") " Jan 24 08:56:19 crc kubenswrapper[4705]: I0124 08:56:19.894054 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lktmr\" (UniqueName: \"kubernetes.io/projected/fc3a8567-314e-40d0-9c66-4a70464ca782-kube-api-access-lktmr\") pod \"fc3a8567-314e-40d0-9c66-4a70464ca782\" (UID: \"fc3a8567-314e-40d0-9c66-4a70464ca782\") " Jan 24 08:56:19 crc kubenswrapper[4705]: I0124 08:56:19.894090 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc3a8567-314e-40d0-9c66-4a70464ca782-utilities\") pod \"fc3a8567-314e-40d0-9c66-4a70464ca782\" (UID: \"fc3a8567-314e-40d0-9c66-4a70464ca782\") " Jan 24 08:56:19 crc kubenswrapper[4705]: I0124 08:56:19.894983 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc3a8567-314e-40d0-9c66-4a70464ca782-utilities" (OuterVolumeSpecName: "utilities") pod "fc3a8567-314e-40d0-9c66-4a70464ca782" (UID: "fc3a8567-314e-40d0-9c66-4a70464ca782"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:56:19 crc kubenswrapper[4705]: I0124 08:56:19.896348 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc3a8567-314e-40d0-9c66-4a70464ca782-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:56:19 crc kubenswrapper[4705]: I0124 08:56:19.902978 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc3a8567-314e-40d0-9c66-4a70464ca782-kube-api-access-lktmr" (OuterVolumeSpecName: "kube-api-access-lktmr") pod "fc3a8567-314e-40d0-9c66-4a70464ca782" (UID: "fc3a8567-314e-40d0-9c66-4a70464ca782"). InnerVolumeSpecName "kube-api-access-lktmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.000512 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lktmr\" (UniqueName: \"kubernetes.io/projected/fc3a8567-314e-40d0-9c66-4a70464ca782-kube-api-access-lktmr\") on node \"crc\" DevicePath \"\"" Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.030190 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc3a8567-314e-40d0-9c66-4a70464ca782-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc3a8567-314e-40d0-9c66-4a70464ca782" (UID: "fc3a8567-314e-40d0-9c66-4a70464ca782"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.103153 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc3a8567-314e-40d0-9c66-4a70464ca782-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.335106 4705 generic.go:334] "Generic (PLEG): container finished" podID="fc3a8567-314e-40d0-9c66-4a70464ca782" containerID="acdb889ed351941c6baf485c234077400f7bc71f7e13b7332aed1283041e0b52" exitCode=0 Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.335150 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rbtvw" event={"ID":"fc3a8567-314e-40d0-9c66-4a70464ca782","Type":"ContainerDied","Data":"acdb889ed351941c6baf485c234077400f7bc71f7e13b7332aed1283041e0b52"} Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.335519 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rbtvw" event={"ID":"fc3a8567-314e-40d0-9c66-4a70464ca782","Type":"ContainerDied","Data":"b479235294d9658d4c0c77006b6e7da79424147995b43bec2fca37a2a7c7526e"} Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.335539 4705 scope.go:117] "RemoveContainer" containerID="acdb889ed351941c6baf485c234077400f7bc71f7e13b7332aed1283041e0b52" Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.335221 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rbtvw" Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.338187 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9rcfs/must-gather-nwqc4" event={"ID":"9c59f136-51de-4fe6-95c6-f00cf94c1e02","Type":"ContainerStarted","Data":"c2d18c8871a9c2d9a10329d1438982a6327084e466311e4f5a362e5b5d6b4c11"} Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.338226 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9rcfs/must-gather-nwqc4" event={"ID":"9c59f136-51de-4fe6-95c6-f00cf94c1e02","Type":"ContainerStarted","Data":"4285f50d74a41786ccbfb08214040e3ebe412f7889aa64669ea127fcac882b8e"} Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.338237 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9rcfs/must-gather-nwqc4" event={"ID":"9c59f136-51de-4fe6-95c6-f00cf94c1e02","Type":"ContainerStarted","Data":"7c8d455b02018ba10d1a0f84784bfee9502c873a3f6322ad875c778b499b3486"} Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.358676 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9rcfs/must-gather-nwqc4" podStartSLOduration=2.358649746 podStartE2EDuration="2.358649746s" podCreationTimestamp="2026-01-24 08:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:56:20.354763916 +0000 UTC m=+4519.074637214" watchObservedRunningTime="2026-01-24 08:56:20.358649746 +0000 UTC m=+4519.078523034" Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.362492 4705 scope.go:117] "RemoveContainer" containerID="a8c643fd72c2909555011d5ef48b25f2666f8c418922cc70603a27cda9d61443" Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.387585 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rbtvw"] Jan 24 08:56:20 crc 
kubenswrapper[4705]: I0124 08:56:20.396323 4705 scope.go:117] "RemoveContainer" containerID="1530700d7c7e5ebf7d567f1e7012ebad054e727437cb3bd0b88bec3c44a31108" Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.400056 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rbtvw"] Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.446465 4705 scope.go:117] "RemoveContainer" containerID="acdb889ed351941c6baf485c234077400f7bc71f7e13b7332aed1283041e0b52" Jan 24 08:56:20 crc kubenswrapper[4705]: E0124 08:56:20.446960 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acdb889ed351941c6baf485c234077400f7bc71f7e13b7332aed1283041e0b52\": container with ID starting with acdb889ed351941c6baf485c234077400f7bc71f7e13b7332aed1283041e0b52 not found: ID does not exist" containerID="acdb889ed351941c6baf485c234077400f7bc71f7e13b7332aed1283041e0b52" Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.446994 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acdb889ed351941c6baf485c234077400f7bc71f7e13b7332aed1283041e0b52"} err="failed to get container status \"acdb889ed351941c6baf485c234077400f7bc71f7e13b7332aed1283041e0b52\": rpc error: code = NotFound desc = could not find container \"acdb889ed351941c6baf485c234077400f7bc71f7e13b7332aed1283041e0b52\": container with ID starting with acdb889ed351941c6baf485c234077400f7bc71f7e13b7332aed1283041e0b52 not found: ID does not exist" Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.447017 4705 scope.go:117] "RemoveContainer" containerID="a8c643fd72c2909555011d5ef48b25f2666f8c418922cc70603a27cda9d61443" Jan 24 08:56:20 crc kubenswrapper[4705]: E0124 08:56:20.447221 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a8c643fd72c2909555011d5ef48b25f2666f8c418922cc70603a27cda9d61443\": container with ID starting with a8c643fd72c2909555011d5ef48b25f2666f8c418922cc70603a27cda9d61443 not found: ID does not exist" containerID="a8c643fd72c2909555011d5ef48b25f2666f8c418922cc70603a27cda9d61443" Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.447242 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8c643fd72c2909555011d5ef48b25f2666f8c418922cc70603a27cda9d61443"} err="failed to get container status \"a8c643fd72c2909555011d5ef48b25f2666f8c418922cc70603a27cda9d61443\": rpc error: code = NotFound desc = could not find container \"a8c643fd72c2909555011d5ef48b25f2666f8c418922cc70603a27cda9d61443\": container with ID starting with a8c643fd72c2909555011d5ef48b25f2666f8c418922cc70603a27cda9d61443 not found: ID does not exist" Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.447253 4705 scope.go:117] "RemoveContainer" containerID="1530700d7c7e5ebf7d567f1e7012ebad054e727437cb3bd0b88bec3c44a31108" Jan 24 08:56:20 crc kubenswrapper[4705]: E0124 08:56:20.447409 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1530700d7c7e5ebf7d567f1e7012ebad054e727437cb3bd0b88bec3c44a31108\": container with ID starting with 1530700d7c7e5ebf7d567f1e7012ebad054e727437cb3bd0b88bec3c44a31108 not found: ID does not exist" containerID="1530700d7c7e5ebf7d567f1e7012ebad054e727437cb3bd0b88bec3c44a31108" Jan 24 08:56:20 crc kubenswrapper[4705]: I0124 08:56:20.447426 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1530700d7c7e5ebf7d567f1e7012ebad054e727437cb3bd0b88bec3c44a31108"} err="failed to get container status \"1530700d7c7e5ebf7d567f1e7012ebad054e727437cb3bd0b88bec3c44a31108\": rpc error: code = NotFound desc = could not find container \"1530700d7c7e5ebf7d567f1e7012ebad054e727437cb3bd0b88bec3c44a31108\": container with ID 
starting with 1530700d7c7e5ebf7d567f1e7012ebad054e727437cb3bd0b88bec3c44a31108 not found: ID does not exist" Jan 24 08:56:21 crc kubenswrapper[4705]: I0124 08:56:21.594470 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc3a8567-314e-40d0-9c66-4a70464ca782" path="/var/lib/kubelet/pods/fc3a8567-314e-40d0-9c66-4a70464ca782/volumes" Jan 24 08:56:23 crc kubenswrapper[4705]: I0124 08:56:23.581449 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:56:23 crc kubenswrapper[4705]: E0124 08:56:23.582218 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:56:23 crc kubenswrapper[4705]: I0124 08:56:23.687360 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9rcfs/crc-debug-djrfg"] Jan 24 08:56:23 crc kubenswrapper[4705]: E0124 08:56:23.688142 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3a8567-314e-40d0-9c66-4a70464ca782" containerName="extract-utilities" Jan 24 08:56:23 crc kubenswrapper[4705]: I0124 08:56:23.688162 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3a8567-314e-40d0-9c66-4a70464ca782" containerName="extract-utilities" Jan 24 08:56:23 crc kubenswrapper[4705]: E0124 08:56:23.688183 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3a8567-314e-40d0-9c66-4a70464ca782" containerName="extract-content" Jan 24 08:56:23 crc kubenswrapper[4705]: I0124 08:56:23.688190 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3a8567-314e-40d0-9c66-4a70464ca782" containerName="extract-content" Jan 
24 08:56:23 crc kubenswrapper[4705]: E0124 08:56:23.688199 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3a8567-314e-40d0-9c66-4a70464ca782" containerName="registry-server" Jan 24 08:56:23 crc kubenswrapper[4705]: I0124 08:56:23.688205 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3a8567-314e-40d0-9c66-4a70464ca782" containerName="registry-server" Jan 24 08:56:23 crc kubenswrapper[4705]: I0124 08:56:23.688509 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc3a8567-314e-40d0-9c66-4a70464ca782" containerName="registry-server" Jan 24 08:56:23 crc kubenswrapper[4705]: I0124 08:56:23.689186 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9rcfs/crc-debug-djrfg" Jan 24 08:56:23 crc kubenswrapper[4705]: I0124 08:56:23.691462 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9rcfs"/"default-dockercfg-cdd8b" Jan 24 08:56:23 crc kubenswrapper[4705]: I0124 08:56:23.696382 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1f4de95-b86e-42ad-b1fd-fba12ffdb01f-host\") pod \"crc-debug-djrfg\" (UID: \"c1f4de95-b86e-42ad-b1fd-fba12ffdb01f\") " pod="openshift-must-gather-9rcfs/crc-debug-djrfg" Jan 24 08:56:23 crc kubenswrapper[4705]: I0124 08:56:23.696478 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h44k7\" (UniqueName: \"kubernetes.io/projected/c1f4de95-b86e-42ad-b1fd-fba12ffdb01f-kube-api-access-h44k7\") pod \"crc-debug-djrfg\" (UID: \"c1f4de95-b86e-42ad-b1fd-fba12ffdb01f\") " pod="openshift-must-gather-9rcfs/crc-debug-djrfg" Jan 24 08:56:23 crc kubenswrapper[4705]: I0124 08:56:23.798329 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h44k7\" (UniqueName: 
\"kubernetes.io/projected/c1f4de95-b86e-42ad-b1fd-fba12ffdb01f-kube-api-access-h44k7\") pod \"crc-debug-djrfg\" (UID: \"c1f4de95-b86e-42ad-b1fd-fba12ffdb01f\") " pod="openshift-must-gather-9rcfs/crc-debug-djrfg" Jan 24 08:56:23 crc kubenswrapper[4705]: I0124 08:56:23.798539 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1f4de95-b86e-42ad-b1fd-fba12ffdb01f-host\") pod \"crc-debug-djrfg\" (UID: \"c1f4de95-b86e-42ad-b1fd-fba12ffdb01f\") " pod="openshift-must-gather-9rcfs/crc-debug-djrfg" Jan 24 08:56:23 crc kubenswrapper[4705]: I0124 08:56:23.798671 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1f4de95-b86e-42ad-b1fd-fba12ffdb01f-host\") pod \"crc-debug-djrfg\" (UID: \"c1f4de95-b86e-42ad-b1fd-fba12ffdb01f\") " pod="openshift-must-gather-9rcfs/crc-debug-djrfg" Jan 24 08:56:23 crc kubenswrapper[4705]: I0124 08:56:23.815636 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h44k7\" (UniqueName: \"kubernetes.io/projected/c1f4de95-b86e-42ad-b1fd-fba12ffdb01f-kube-api-access-h44k7\") pod \"crc-debug-djrfg\" (UID: \"c1f4de95-b86e-42ad-b1fd-fba12ffdb01f\") " pod="openshift-must-gather-9rcfs/crc-debug-djrfg" Jan 24 08:56:24 crc kubenswrapper[4705]: I0124 08:56:24.019928 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9rcfs/crc-debug-djrfg" Jan 24 08:56:24 crc kubenswrapper[4705]: W0124 08:56:24.072461 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1f4de95_b86e_42ad_b1fd_fba12ffdb01f.slice/crio-8b546c16fc0a544ec3f695b8bb55d26be5c036c30593da6532e1c715102c4e93 WatchSource:0}: Error finding container 8b546c16fc0a544ec3f695b8bb55d26be5c036c30593da6532e1c715102c4e93: Status 404 returned error can't find the container with id 8b546c16fc0a544ec3f695b8bb55d26be5c036c30593da6532e1c715102c4e93 Jan 24 08:56:24 crc kubenswrapper[4705]: I0124 08:56:24.377131 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9rcfs/crc-debug-djrfg" event={"ID":"c1f4de95-b86e-42ad-b1fd-fba12ffdb01f","Type":"ContainerStarted","Data":"8b546c16fc0a544ec3f695b8bb55d26be5c036c30593da6532e1c715102c4e93"} Jan 24 08:56:25 crc kubenswrapper[4705]: I0124 08:56:25.401552 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9rcfs/crc-debug-djrfg" event={"ID":"c1f4de95-b86e-42ad-b1fd-fba12ffdb01f","Type":"ContainerStarted","Data":"7ff3489f8be8d5973f54e10a74a71ddda30b10bedad523dd31a760cdd4125981"} Jan 24 08:56:25 crc kubenswrapper[4705]: I0124 08:56:25.417935 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9rcfs/crc-debug-djrfg" podStartSLOduration=2.417907095 podStartE2EDuration="2.417907095s" podCreationTimestamp="2026-01-24 08:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:56:25.414259742 +0000 UTC m=+4524.134133050" watchObservedRunningTime="2026-01-24 08:56:25.417907095 +0000 UTC m=+4524.137780403" Jan 24 08:56:36 crc kubenswrapper[4705]: I0124 08:56:36.500296 4705 generic.go:334] "Generic (PLEG): container finished" podID="c1f4de95-b86e-42ad-b1fd-fba12ffdb01f" 
containerID="7ff3489f8be8d5973f54e10a74a71ddda30b10bedad523dd31a760cdd4125981" exitCode=0 Jan 24 08:56:36 crc kubenswrapper[4705]: I0124 08:56:36.500397 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9rcfs/crc-debug-djrfg" event={"ID":"c1f4de95-b86e-42ad-b1fd-fba12ffdb01f","Type":"ContainerDied","Data":"7ff3489f8be8d5973f54e10a74a71ddda30b10bedad523dd31a760cdd4125981"} Jan 24 08:56:37 crc kubenswrapper[4705]: I0124 08:56:37.582812 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:56:37 crc kubenswrapper[4705]: E0124 08:56:37.583302 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:56:37 crc kubenswrapper[4705]: I0124 08:56:37.690931 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9rcfs/crc-debug-djrfg" Jan 24 08:56:37 crc kubenswrapper[4705]: I0124 08:56:37.880915 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9rcfs/crc-debug-djrfg"] Jan 24 08:56:37 crc kubenswrapper[4705]: I0124 08:56:37.894715 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9rcfs/crc-debug-djrfg"] Jan 24 08:56:37 crc kubenswrapper[4705]: I0124 08:56:37.895618 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1f4de95-b86e-42ad-b1fd-fba12ffdb01f-host\") pod \"c1f4de95-b86e-42ad-b1fd-fba12ffdb01f\" (UID: \"c1f4de95-b86e-42ad-b1fd-fba12ffdb01f\") " Jan 24 08:56:37 crc kubenswrapper[4705]: I0124 08:56:37.895739 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h44k7\" (UniqueName: \"kubernetes.io/projected/c1f4de95-b86e-42ad-b1fd-fba12ffdb01f-kube-api-access-h44k7\") pod \"c1f4de95-b86e-42ad-b1fd-fba12ffdb01f\" (UID: \"c1f4de95-b86e-42ad-b1fd-fba12ffdb01f\") " Jan 24 08:56:37 crc kubenswrapper[4705]: I0124 08:56:37.895846 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f4de95-b86e-42ad-b1fd-fba12ffdb01f-host" (OuterVolumeSpecName: "host") pod "c1f4de95-b86e-42ad-b1fd-fba12ffdb01f" (UID: "c1f4de95-b86e-42ad-b1fd-fba12ffdb01f"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 08:56:37 crc kubenswrapper[4705]: I0124 08:56:37.896197 4705 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1f4de95-b86e-42ad-b1fd-fba12ffdb01f-host\") on node \"crc\" DevicePath \"\"" Jan 24 08:56:39 crc kubenswrapper[4705]: I0124 08:56:39.047209 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9rcfs/crc-debug-qj99f"] Jan 24 08:56:39 crc kubenswrapper[4705]: E0124 08:56:39.047607 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f4de95-b86e-42ad-b1fd-fba12ffdb01f" containerName="container-00" Jan 24 08:56:39 crc kubenswrapper[4705]: I0124 08:56:39.047619 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f4de95-b86e-42ad-b1fd-fba12ffdb01f" containerName="container-00" Jan 24 08:56:39 crc kubenswrapper[4705]: I0124 08:56:39.047843 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f4de95-b86e-42ad-b1fd-fba12ffdb01f" containerName="container-00" Jan 24 08:56:39 crc kubenswrapper[4705]: I0124 08:56:39.048480 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9rcfs/crc-debug-qj99f" Jan 24 08:56:39 crc kubenswrapper[4705]: I0124 08:56:39.704007 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b6b64b97-cf32-4956-b3a5-80389e6d1ea5-host\") pod \"crc-debug-qj99f\" (UID: \"b6b64b97-cf32-4956-b3a5-80389e6d1ea5\") " pod="openshift-must-gather-9rcfs/crc-debug-qj99f" Jan 24 08:56:39 crc kubenswrapper[4705]: I0124 08:56:39.704363 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn8hg\" (UniqueName: \"kubernetes.io/projected/b6b64b97-cf32-4956-b3a5-80389e6d1ea5-kube-api-access-cn8hg\") pod \"crc-debug-qj99f\" (UID: \"b6b64b97-cf32-4956-b3a5-80389e6d1ea5\") " pod="openshift-must-gather-9rcfs/crc-debug-qj99f" Jan 24 08:56:39 crc kubenswrapper[4705]: I0124 08:56:39.805861 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b6b64b97-cf32-4956-b3a5-80389e6d1ea5-host\") pod \"crc-debug-qj99f\" (UID: \"b6b64b97-cf32-4956-b3a5-80389e6d1ea5\") " pod="openshift-must-gather-9rcfs/crc-debug-qj99f" Jan 24 08:56:39 crc kubenswrapper[4705]: I0124 08:56:39.805944 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn8hg\" (UniqueName: \"kubernetes.io/projected/b6b64b97-cf32-4956-b3a5-80389e6d1ea5-kube-api-access-cn8hg\") pod \"crc-debug-qj99f\" (UID: \"b6b64b97-cf32-4956-b3a5-80389e6d1ea5\") " pod="openshift-must-gather-9rcfs/crc-debug-qj99f" Jan 24 08:56:39 crc kubenswrapper[4705]: I0124 08:56:39.806329 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b6b64b97-cf32-4956-b3a5-80389e6d1ea5-host\") pod \"crc-debug-qj99f\" (UID: \"b6b64b97-cf32-4956-b3a5-80389e6d1ea5\") " pod="openshift-must-gather-9rcfs/crc-debug-qj99f" Jan 24 08:56:39 crc 
kubenswrapper[4705]: I0124 08:56:39.825942 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn8hg\" (UniqueName: \"kubernetes.io/projected/b6b64b97-cf32-4956-b3a5-80389e6d1ea5-kube-api-access-cn8hg\") pod \"crc-debug-qj99f\" (UID: \"b6b64b97-cf32-4956-b3a5-80389e6d1ea5\") " pod="openshift-must-gather-9rcfs/crc-debug-qj99f" Jan 24 08:56:39 crc kubenswrapper[4705]: I0124 08:56:39.894805 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f4de95-b86e-42ad-b1fd-fba12ffdb01f-kube-api-access-h44k7" (OuterVolumeSpecName: "kube-api-access-h44k7") pod "c1f4de95-b86e-42ad-b1fd-fba12ffdb01f" (UID: "c1f4de95-b86e-42ad-b1fd-fba12ffdb01f"). InnerVolumeSpecName "kube-api-access-h44k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:56:39 crc kubenswrapper[4705]: I0124 08:56:39.920768 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h44k7\" (UniqueName: \"kubernetes.io/projected/c1f4de95-b86e-42ad-b1fd-fba12ffdb01f-kube-api-access-h44k7\") on node \"crc\" DevicePath \"\"" Jan 24 08:56:39 crc kubenswrapper[4705]: I0124 08:56:39.940958 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1f4de95-b86e-42ad-b1fd-fba12ffdb01f" path="/var/lib/kubelet/pods/c1f4de95-b86e-42ad-b1fd-fba12ffdb01f/volumes" Jan 24 08:56:40 crc kubenswrapper[4705]: I0124 08:56:39.998833 4705 scope.go:117] "RemoveContainer" containerID="7ff3489f8be8d5973f54e10a74a71ddda30b10bedad523dd31a760cdd4125981" Jan 24 08:56:40 crc kubenswrapper[4705]: I0124 08:56:39.998977 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9rcfs/crc-debug-djrfg" Jan 24 08:56:40 crc kubenswrapper[4705]: I0124 08:56:40.073929 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9rcfs/crc-debug-qj99f" Jan 24 08:56:41 crc kubenswrapper[4705]: I0124 08:56:41.035462 4705 generic.go:334] "Generic (PLEG): container finished" podID="b6b64b97-cf32-4956-b3a5-80389e6d1ea5" containerID="2cab15571195f161211ec5c6a36e8f17ba9e887003e136a8a9cee20335ef63ca" exitCode=1 Jan 24 08:56:41 crc kubenswrapper[4705]: I0124 08:56:41.035511 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9rcfs/crc-debug-qj99f" event={"ID":"b6b64b97-cf32-4956-b3a5-80389e6d1ea5","Type":"ContainerDied","Data":"2cab15571195f161211ec5c6a36e8f17ba9e887003e136a8a9cee20335ef63ca"} Jan 24 08:56:41 crc kubenswrapper[4705]: I0124 08:56:41.035540 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9rcfs/crc-debug-qj99f" event={"ID":"b6b64b97-cf32-4956-b3a5-80389e6d1ea5","Type":"ContainerStarted","Data":"f332339bec0099097d6c8b6590e05970bc022735f9ba408c23b7b61f3fb387b5"} Jan 24 08:56:41 crc kubenswrapper[4705]: I0124 08:56:41.079767 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9rcfs/crc-debug-qj99f"] Jan 24 08:56:41 crc kubenswrapper[4705]: I0124 08:56:41.088556 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9rcfs/crc-debug-qj99f"] Jan 24 08:56:42 crc kubenswrapper[4705]: I0124 08:56:42.419576 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9rcfs/crc-debug-qj99f" Jan 24 08:56:42 crc kubenswrapper[4705]: I0124 08:56:42.594577 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b6b64b97-cf32-4956-b3a5-80389e6d1ea5-host\") pod \"b6b64b97-cf32-4956-b3a5-80389e6d1ea5\" (UID: \"b6b64b97-cf32-4956-b3a5-80389e6d1ea5\") " Jan 24 08:56:42 crc kubenswrapper[4705]: I0124 08:56:42.594754 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b64b97-cf32-4956-b3a5-80389e6d1ea5-host" (OuterVolumeSpecName: "host") pod "b6b64b97-cf32-4956-b3a5-80389e6d1ea5" (UID: "b6b64b97-cf32-4956-b3a5-80389e6d1ea5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 08:56:42 crc kubenswrapper[4705]: I0124 08:56:42.594905 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn8hg\" (UniqueName: \"kubernetes.io/projected/b6b64b97-cf32-4956-b3a5-80389e6d1ea5-kube-api-access-cn8hg\") pod \"b6b64b97-cf32-4956-b3a5-80389e6d1ea5\" (UID: \"b6b64b97-cf32-4956-b3a5-80389e6d1ea5\") " Jan 24 08:56:42 crc kubenswrapper[4705]: I0124 08:56:42.595409 4705 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b6b64b97-cf32-4956-b3a5-80389e6d1ea5-host\") on node \"crc\" DevicePath \"\"" Jan 24 08:56:42 crc kubenswrapper[4705]: I0124 08:56:42.608212 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b64b97-cf32-4956-b3a5-80389e6d1ea5-kube-api-access-cn8hg" (OuterVolumeSpecName: "kube-api-access-cn8hg") pod "b6b64b97-cf32-4956-b3a5-80389e6d1ea5" (UID: "b6b64b97-cf32-4956-b3a5-80389e6d1ea5"). InnerVolumeSpecName "kube-api-access-cn8hg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:56:42 crc kubenswrapper[4705]: I0124 08:56:42.697137 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn8hg\" (UniqueName: \"kubernetes.io/projected/b6b64b97-cf32-4956-b3a5-80389e6d1ea5-kube-api-access-cn8hg\") on node \"crc\" DevicePath \"\"" Jan 24 08:56:43 crc kubenswrapper[4705]: I0124 08:56:43.059917 4705 scope.go:117] "RemoveContainer" containerID="2cab15571195f161211ec5c6a36e8f17ba9e887003e136a8a9cee20335ef63ca" Jan 24 08:56:43 crc kubenswrapper[4705]: I0124 08:56:43.060115 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9rcfs/crc-debug-qj99f" Jan 24 08:56:43 crc kubenswrapper[4705]: I0124 08:56:43.588766 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6b64b97-cf32-4956-b3a5-80389e6d1ea5" path="/var/lib/kubelet/pods/b6b64b97-cf32-4956-b3a5-80389e6d1ea5/volumes" Jan 24 08:56:48 crc kubenswrapper[4705]: I0124 08:56:48.576857 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:56:48 crc kubenswrapper[4705]: E0124 08:56:48.577663 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:57:03 crc kubenswrapper[4705]: I0124 08:57:03.576382 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:57:03 crc kubenswrapper[4705]: E0124 08:57:03.577106 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:57:14 crc kubenswrapper[4705]: I0124 08:57:14.575761 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:57:14 crc kubenswrapper[4705]: E0124 08:57:14.576436 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:57:28 crc kubenswrapper[4705]: I0124 08:57:28.576857 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:57:28 crc kubenswrapper[4705]: E0124 08:57:28.577924 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:57:43 crc kubenswrapper[4705]: I0124 08:57:43.576306 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:57:43 crc kubenswrapper[4705]: E0124 08:57:43.577198 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:57:56 crc kubenswrapper[4705]: I0124 08:57:56.575425 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:57:56 crc kubenswrapper[4705]: E0124 08:57:56.576222 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:58:07 crc kubenswrapper[4705]: I0124 08:58:06.999863 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_2acfda0f-e35f-4215-8f97-dbb885b75b34/init-config-reloader/0.log" Jan 24 08:58:07 crc kubenswrapper[4705]: I0124 08:58:07.170539 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_2acfda0f-e35f-4215-8f97-dbb885b75b34/alertmanager/0.log" Jan 24 08:58:07 crc kubenswrapper[4705]: I0124 08:58:07.206041 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_2acfda0f-e35f-4215-8f97-dbb885b75b34/init-config-reloader/0.log" Jan 24 08:58:07 crc kubenswrapper[4705]: I0124 08:58:07.243193 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_2acfda0f-e35f-4215-8f97-dbb885b75b34/config-reloader/0.log" Jan 24 08:58:07 crc kubenswrapper[4705]: I0124 08:58:07.371275 4705 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_95e14985-fae8-4e28-91fe-4234d31f3f33/aodh-api/0.log" Jan 24 08:58:07 crc kubenswrapper[4705]: I0124 08:58:07.404840 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_95e14985-fae8-4e28-91fe-4234d31f3f33/aodh-listener/0.log" Jan 24 08:58:07 crc kubenswrapper[4705]: I0124 08:58:07.448905 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_95e14985-fae8-4e28-91fe-4234d31f3f33/aodh-evaluator/0.log" Jan 24 08:58:07 crc kubenswrapper[4705]: I0124 08:58:07.585887 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_95e14985-fae8-4e28-91fe-4234d31f3f33/aodh-notifier/0.log" Jan 24 08:58:07 crc kubenswrapper[4705]: I0124 08:58:07.619981 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-55f654f7bb-65t7w_9cefd3d6-3762-41d6-adc7-31134fde2bb7/barbican-api/0.log" Jan 24 08:58:07 crc kubenswrapper[4705]: I0124 08:58:07.674937 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-55f654f7bb-65t7w_9cefd3d6-3762-41d6-adc7-31134fde2bb7/barbican-api-log/0.log" Jan 24 08:58:07 crc kubenswrapper[4705]: I0124 08:58:07.807996 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-dbf49b754-xk8bz_85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e/barbican-keystone-listener/0.log" Jan 24 08:58:07 crc kubenswrapper[4705]: I0124 08:58:07.851310 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-dbf49b754-xk8bz_85ab9eae-d7e1-4ce4-a15f-68a2a9aaba1e/barbican-keystone-listener-log/0.log" Jan 24 08:58:08 crc kubenswrapper[4705]: I0124 08:58:08.024794 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6698559bb9-vn9c8_6328de33-ec5c-402a-aece-9b944c259b59/barbican-worker/0.log" Jan 24 08:58:08 crc kubenswrapper[4705]: I0124 08:58:08.090423 4705 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6698559bb9-vn9c8_6328de33-ec5c-402a-aece-9b944c259b59/barbican-worker-log/0.log" Jan 24 08:58:08 crc kubenswrapper[4705]: I0124 08:58:08.151720 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-8rw7t_67b8ef17-3a9a-4ebc-af02-eb475e2304af/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:58:08 crc kubenswrapper[4705]: I0124 08:58:08.305412 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7d641fc4-49a3-4686-9839-730afa8afd5d/ceilometer-central-agent/0.log" Jan 24 08:58:08 crc kubenswrapper[4705]: I0124 08:58:08.335940 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7d641fc4-49a3-4686-9839-730afa8afd5d/ceilometer-notification-agent/0.log" Jan 24 08:58:08 crc kubenswrapper[4705]: I0124 08:58:08.387752 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7d641fc4-49a3-4686-9839-730afa8afd5d/proxy-httpd/0.log" Jan 24 08:58:08 crc kubenswrapper[4705]: I0124 08:58:08.466074 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7d641fc4-49a3-4686-9839-730afa8afd5d/sg-core/0.log" Jan 24 08:58:08 crc kubenswrapper[4705]: I0124 08:58:08.559331 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6b3b0e00-82d8-4096-80b1-a9edffb3cdaf/cinder-api-log/0.log" Jan 24 08:58:08 crc kubenswrapper[4705]: I0124 08:58:08.575482 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:58:08 crc kubenswrapper[4705]: E0124 08:58:08.575763 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:58:08 crc kubenswrapper[4705]: I0124 08:58:08.603605 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6b3b0e00-82d8-4096-80b1-a9edffb3cdaf/cinder-api/0.log" Jan 24 08:58:08 crc kubenswrapper[4705]: I0124 08:58:08.751129 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_76eadf8b-3ddc-461f-b8d6-87978146e077/cinder-scheduler/0.log" Jan 24 08:58:08 crc kubenswrapper[4705]: I0124 08:58:08.840062 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_76eadf8b-3ddc-461f-b8d6-87978146e077/probe/0.log" Jan 24 08:58:08 crc kubenswrapper[4705]: I0124 08:58:08.992730 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-pmqzj_f8d0eb2c-3a7c-4b3d-9cb7-b2c46beec516/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:58:09 crc kubenswrapper[4705]: I0124 08:58:09.055284 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xsvmx_b3f04082-08d1-49ca-91fd-b538d81a8923/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:58:09 crc kubenswrapper[4705]: I0124 08:58:09.198526 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-ks5p8_333dd8c4-e753-48ab-be34-640378c23251/init/0.log" Jan 24 08:58:09 crc kubenswrapper[4705]: I0124 08:58:09.727200 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-ks5p8_333dd8c4-e753-48ab-be34-640378c23251/init/0.log" Jan 24 08:58:09 crc kubenswrapper[4705]: I0124 08:58:09.771192 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-ks5p8_333dd8c4-e753-48ab-be34-640378c23251/dnsmasq-dns/0.log" Jan 24 08:58:09 crc kubenswrapper[4705]: I0124 08:58:09.802750 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-f5w7d_f92be2f8-1ff3-4237-8046-ff1352af1bef/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:58:09 crc kubenswrapper[4705]: I0124 08:58:09.974432 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_881a6a33-1c19-4868-b1d8-ff8efde83513/glance-httpd/0.log" Jan 24 08:58:10 crc kubenswrapper[4705]: I0124 08:58:10.011502 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_881a6a33-1c19-4868-b1d8-ff8efde83513/glance-log/0.log" Jan 24 08:58:10 crc kubenswrapper[4705]: I0124 08:58:10.140626 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_2ddfb089-24fd-436d-9f98-df7b3933d5f1/glance-log/0.log" Jan 24 08:58:10 crc kubenswrapper[4705]: I0124 08:58:10.186749 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_2ddfb089-24fd-436d-9f98-df7b3933d5f1/glance-httpd/0.log" Jan 24 08:58:10 crc kubenswrapper[4705]: I0124 08:58:10.717799 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7798c79c68-jdzb7_d28ccc81-d764-4810-a649-42ff56ae43c8/heat-engine/0.log" Jan 24 08:58:10 crc kubenswrapper[4705]: I0124 08:58:10.727942 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-6bbd698cdd-bj25j_bb805121-ae65-457e-877f-2db0ae5e61dc/heat-api/0.log" Jan 24 08:58:10 crc kubenswrapper[4705]: I0124 08:58:10.998283 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-67559d7f8-s8rzr_13afed15-05ec-4ae6-a29f-5c9226770a19/heat-cfnapi/0.log" Jan 24 08:58:11 crc kubenswrapper[4705]: 
I0124 08:58:11.915986 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-565zf_69797704-2611-4e94-8321-878049b18d9e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:58:11 crc kubenswrapper[4705]: I0124 08:58:11.926844 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-5thzw_751e9e62-9148-48d5-9630-158d42b6b78d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:58:11 crc kubenswrapper[4705]: I0124 08:58:11.983933 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5c7b7bd5d5-rdcz5_d5f2747d-33dc-4eb9-89d2-2c6f2907d4e2/keystone-api/0.log" Jan 24 08:58:12 crc kubenswrapper[4705]: I0124 08:58:12.125837 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_b72fb68d-e944-4d99-b1d0-eb097c807e14/kube-state-metrics/0.log" Jan 24 08:58:12 crc kubenswrapper[4705]: I0124 08:58:12.209905 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-n4ljs_892c0147-b4a3-451d-9c4c-c2a0cb3cf56e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:58:12 crc kubenswrapper[4705]: I0124 08:58:12.430682 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bcd878cb5-xnt7l_def20def-8ec8-4bb9-9c58-c557b1610ae9/neutron-httpd/0.log" Jan 24 08:58:12 crc kubenswrapper[4705]: I0124 08:58:12.436573 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bcd878cb5-xnt7l_def20def-8ec8-4bb9-9c58-c557b1610ae9/neutron-api/0.log" Jan 24 08:58:12 crc kubenswrapper[4705]: I0124 08:58:12.661661 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-wj2hz_3f03fcbf-053b-4f3b-b96a-e7f325f36a0a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:58:12 crc 
kubenswrapper[4705]: I0124 08:58:12.881682 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_bdffe46c-ac47-422d-aec3-896fa1575ca7/nova-api-log/0.log" Jan 24 08:58:13 crc kubenswrapper[4705]: I0124 08:58:13.042555 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7e941386-bff0-4fd5-a452-0f659b35eae9/nova-cell0-conductor-conductor/0.log" Jan 24 08:58:13 crc kubenswrapper[4705]: I0124 08:58:13.328111 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_0d7a0724-6bfc-440e-958b-28313c59010d/nova-cell1-conductor-conductor/0.log" Jan 24 08:58:13 crc kubenswrapper[4705]: I0124 08:58:13.395014 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_bdffe46c-ac47-422d-aec3-896fa1575ca7/nova-api-api/0.log" Jan 24 08:58:13 crc kubenswrapper[4705]: I0124 08:58:13.453060 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_90e3deeb-1218-4c9b-9e33-3e720ca605bc/nova-cell1-novncproxy-novncproxy/0.log" Jan 24 08:58:13 crc kubenswrapper[4705]: I0124 08:58:13.559206 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-rmhql_392633fe-e467-4669-9773-89b44ed68ac6/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:58:13 crc kubenswrapper[4705]: I0124 08:58:13.798708 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fbcacec9-3f9e-488c-846b-708af727b753/nova-metadata-log/0.log" Jan 24 08:58:14 crc kubenswrapper[4705]: I0124 08:58:14.082374 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ba8d3653-1ade-4d27-a7fa-06e616ffe7f2/nova-scheduler-scheduler/0.log" Jan 24 08:58:14 crc kubenswrapper[4705]: I0124 08:58:14.091395 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b/mysql-bootstrap/0.log" Jan 24 08:58:14 crc kubenswrapper[4705]: I0124 08:58:14.298618 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b/mysql-bootstrap/0.log" Jan 24 08:58:14 crc kubenswrapper[4705]: I0124 08:58:14.318592 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9a8b1f7b-5d6f-4b6b-aaaa-a6c54637254b/galera/0.log" Jan 24 08:58:14 crc kubenswrapper[4705]: I0124 08:58:14.465131 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_95a51efd-0ac3-4c02-8052-5b4017444820/mysql-bootstrap/0.log" Jan 24 08:58:14 crc kubenswrapper[4705]: I0124 08:58:14.703009 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_95a51efd-0ac3-4c02-8052-5b4017444820/galera/0.log" Jan 24 08:58:14 crc kubenswrapper[4705]: I0124 08:58:14.726775 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_95a51efd-0ac3-4c02-8052-5b4017444820/mysql-bootstrap/0.log" Jan 24 08:58:14 crc kubenswrapper[4705]: I0124 08:58:14.982262 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-dqhqz_e50e3aa7-48d0-4559-9f09-f0a9a54232a7/ovn-controller/0.log" Jan 24 08:58:15 crc kubenswrapper[4705]: I0124 08:58:15.011891 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5bf2f8d1-1a23-4328-9169-1dea01964d94/openstackclient/0.log" Jan 24 08:58:15 crc kubenswrapper[4705]: I0124 08:58:15.217275 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hp2mc_a80046d0-b499-49e8-98aa-78869a5f0482/openstack-network-exporter/0.log" Jan 24 08:58:15 crc kubenswrapper[4705]: I0124 08:58:15.379075 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_fbcacec9-3f9e-488c-846b-708af727b753/nova-metadata-metadata/0.log" Jan 24 08:58:15 crc kubenswrapper[4705]: I0124 08:58:15.427053 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-llw8s_1972dfce-f49c-481e-a252-f1c8ad52ecc5/ovsdb-server-init/0.log" Jan 24 08:58:15 crc kubenswrapper[4705]: I0124 08:58:15.633269 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-llw8s_1972dfce-f49c-481e-a252-f1c8ad52ecc5/ovs-vswitchd/0.log" Jan 24 08:58:15 crc kubenswrapper[4705]: I0124 08:58:15.636106 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-llw8s_1972dfce-f49c-481e-a252-f1c8ad52ecc5/ovsdb-server-init/0.log" Jan 24 08:58:15 crc kubenswrapper[4705]: I0124 08:58:15.665466 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-llw8s_1972dfce-f49c-481e-a252-f1c8ad52ecc5/ovsdb-server/0.log" Jan 24 08:58:15 crc kubenswrapper[4705]: I0124 08:58:15.901492 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_89e4ed86-cfcf-457e-bca5-29d0001a7785/openstack-network-exporter/0.log" Jan 24 08:58:15 crc kubenswrapper[4705]: I0124 08:58:15.930707 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-bfqnl_1dfe7e12-9632-436a-b440-02c0f710ca04/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:58:16 crc kubenswrapper[4705]: I0124 08:58:16.093335 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_89e4ed86-cfcf-457e-bca5-29d0001a7785/ovn-northd/0.log" Jan 24 08:58:16 crc kubenswrapper[4705]: I0124 08:58:16.266563 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ac239835-9243-4353-8ca5-ff79405c5009/openstack-network-exporter/0.log" Jan 24 08:58:16 crc kubenswrapper[4705]: I0124 08:58:16.326843 4705 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ac239835-9243-4353-8ca5-ff79405c5009/ovsdbserver-nb/0.log" Jan 24 08:58:16 crc kubenswrapper[4705]: I0124 08:58:16.453749 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6/openstack-network-exporter/0.log" Jan 24 08:58:16 crc kubenswrapper[4705]: I0124 08:58:16.528537 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6d12964f-7c7d-48bc-8cc0-8c4b5e7ea8f6/ovsdbserver-sb/0.log" Jan 24 08:58:16 crc kubenswrapper[4705]: I0124 08:58:16.729725 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-55877bd6d-swpx2_e650ce3a-8142-469f-bb17-116626c2141b/placement-api/0.log" Jan 24 08:58:16 crc kubenswrapper[4705]: I0124 08:58:16.737806 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-55877bd6d-swpx2_e650ce3a-8142-469f-bb17-116626c2141b/placement-log/0.log" Jan 24 08:58:16 crc kubenswrapper[4705]: I0124 08:58:16.841030 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3aa939bc-2a0f-4610-a5c5-62043aa52bdf/init-config-reloader/0.log" Jan 24 08:58:17 crc kubenswrapper[4705]: I0124 08:58:17.033751 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3aa939bc-2a0f-4610-a5c5-62043aa52bdf/config-reloader/0.log" Jan 24 08:58:17 crc kubenswrapper[4705]: I0124 08:58:17.034633 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3aa939bc-2a0f-4610-a5c5-62043aa52bdf/prometheus/0.log" Jan 24 08:58:17 crc kubenswrapper[4705]: I0124 08:58:17.071615 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3aa939bc-2a0f-4610-a5c5-62043aa52bdf/init-config-reloader/0.log" Jan 24 08:58:17 crc kubenswrapper[4705]: I0124 08:58:17.081492 4705 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3aa939bc-2a0f-4610-a5c5-62043aa52bdf/thanos-sidecar/0.log" Jan 24 08:58:17 crc kubenswrapper[4705]: I0124 08:58:17.306547 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_42a4eca6-7e02-48d4-a187-ea503285c378/setup-container/0.log" Jan 24 08:58:17 crc kubenswrapper[4705]: I0124 08:58:17.484929 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_42a4eca6-7e02-48d4-a187-ea503285c378/setup-container/0.log" Jan 24 08:58:17 crc kubenswrapper[4705]: I0124 08:58:17.546155 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_42a4eca6-7e02-48d4-a187-ea503285c378/rabbitmq/0.log" Jan 24 08:58:17 crc kubenswrapper[4705]: I0124 08:58:17.551682 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_203f66be-7cf6-4664-a0a8-9ed975352414/setup-container/0.log" Jan 24 08:58:17 crc kubenswrapper[4705]: I0124 08:58:17.808867 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_203f66be-7cf6-4664-a0a8-9ed975352414/setup-container/0.log" Jan 24 08:58:17 crc kubenswrapper[4705]: I0124 08:58:17.834386 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_203f66be-7cf6-4664-a0a8-9ed975352414/rabbitmq/0.log" Jan 24 08:58:17 crc kubenswrapper[4705]: I0124 08:58:17.847063 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-cjsmd_a0eb2e96-4e56-4c71-a977-7b27892ba77c/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:58:18 crc kubenswrapper[4705]: I0124 08:58:18.056084 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xfn7j_3579a044-5429-43aa-be25-6720cbb84d82/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 
08:58:18 crc kubenswrapper[4705]: I0124 08:58:18.056227 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-2l7pr_2b3c2835-0838-4592-ae5c-9d442ad0e351/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:58:18 crc kubenswrapper[4705]: I0124 08:58:18.324994 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-xdc7k_8c2d1fb0-3187-4f07-bc44-d3c81689b09e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:58:18 crc kubenswrapper[4705]: I0124 08:58:18.365256 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pqr72_23b20cce-9e55-4a5e-b3ba-72526a662b7d/ssh-known-hosts-edpm-deployment/0.log" Jan 24 08:58:18 crc kubenswrapper[4705]: I0124 08:58:18.636421 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58599c4547-sbsm4_1cce5e47-bb96-4468-8818-29869d013b7b/proxy-server/0.log" Jan 24 08:58:18 crc kubenswrapper[4705]: I0124 08:58:18.732468 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wvgxh_71ae95bb-0592-4ebd-b74a-c2ed2cc5654e/swift-ring-rebalance/0.log" Jan 24 08:58:18 crc kubenswrapper[4705]: I0124 08:58:18.749288 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58599c4547-sbsm4_1cce5e47-bb96-4468-8818-29869d013b7b/proxy-httpd/0.log" Jan 24 08:58:18 crc kubenswrapper[4705]: I0124 08:58:18.916040 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/account-auditor/0.log" Jan 24 08:58:19 crc kubenswrapper[4705]: I0124 08:58:19.341317 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/account-reaper/0.log" Jan 24 08:58:19 crc kubenswrapper[4705]: I0124 08:58:19.373997 4705 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/account-replicator/0.log" Jan 24 08:58:19 crc kubenswrapper[4705]: I0124 08:58:19.385216 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/account-server/0.log" Jan 24 08:58:19 crc kubenswrapper[4705]: I0124 08:58:19.399793 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/container-auditor/0.log" Jan 24 08:58:19 crc kubenswrapper[4705]: I0124 08:58:19.543901 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/container-server/0.log" Jan 24 08:58:19 crc kubenswrapper[4705]: I0124 08:58:19.575887 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:58:19 crc kubenswrapper[4705]: E0124 08:58:19.576412 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:58:19 crc kubenswrapper[4705]: I0124 08:58:19.619605 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/container-updater/0.log" Jan 24 08:58:19 crc kubenswrapper[4705]: I0124 08:58:19.646176 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/container-replicator/0.log" Jan 24 08:58:19 crc kubenswrapper[4705]: I0124 08:58:19.655116 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/object-auditor/0.log" Jan 24 08:58:19 crc kubenswrapper[4705]: I0124 08:58:19.793844 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/object-expirer/0.log" Jan 24 08:58:19 crc kubenswrapper[4705]: I0124 08:58:19.809800 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/object-replicator/0.log" Jan 24 08:58:19 crc kubenswrapper[4705]: I0124 08:58:19.871076 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/object-server/0.log" Jan 24 08:58:19 crc kubenswrapper[4705]: I0124 08:58:19.896151 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/object-updater/0.log" Jan 24 08:58:20 crc kubenswrapper[4705]: I0124 08:58:20.015057 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/rsync/0.log" Jan 24 08:58:20 crc kubenswrapper[4705]: I0124 08:58:20.029292 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2521bbad-8785-4fbf-94fe-7309e9fe3442/swift-recon-cron/0.log" Jan 24 08:58:20 crc kubenswrapper[4705]: I0124 08:58:20.167230 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-7hx5p_ca587b10-b782-4dd1-a3fa-e9dfd773a2e3/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:58:20 crc kubenswrapper[4705]: I0124 08:58:20.288087 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-z4xps_e6982665-cdec-4e7c-b9d1-0c7532cf8830/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 08:58:29 crc kubenswrapper[4705]: I0124 
08:58:29.228671 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ec9c2213-448d-4532-b6a6-3f6242f5ab5f/memcached/0.log" Jan 24 08:58:31 crc kubenswrapper[4705]: I0124 08:58:31.582803 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:58:31 crc kubenswrapper[4705]: E0124 08:58:31.583678 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:58:42 crc kubenswrapper[4705]: I0124 08:58:42.575884 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:58:42 crc kubenswrapper[4705]: E0124 08:58:42.576778 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:58:51 crc kubenswrapper[4705]: I0124 08:58:51.068665 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5_ebc0e3ca-4a5f-4ae7-af73-92edccb58a67/util/0.log" Jan 24 08:58:51 crc kubenswrapper[4705]: I0124 08:58:51.284752 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5_ebc0e3ca-4a5f-4ae7-af73-92edccb58a67/util/0.log" Jan 24 08:58:51 crc kubenswrapper[4705]: I0124 08:58:51.310724 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5_ebc0e3ca-4a5f-4ae7-af73-92edccb58a67/pull/0.log" Jan 24 08:58:51 crc kubenswrapper[4705]: I0124 08:58:51.322889 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5_ebc0e3ca-4a5f-4ae7-af73-92edccb58a67/pull/0.log" Jan 24 08:58:51 crc kubenswrapper[4705]: I0124 08:58:51.491166 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5_ebc0e3ca-4a5f-4ae7-af73-92edccb58a67/util/0.log" Jan 24 08:58:51 crc kubenswrapper[4705]: I0124 08:58:51.506530 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5_ebc0e3ca-4a5f-4ae7-af73-92edccb58a67/pull/0.log" Jan 24 08:58:51 crc kubenswrapper[4705]: I0124 08:58:51.573788 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3fbc58f8dcdcf901cde3b0a270b2b9b5c4672a4adbb2d653c412874ab2lhvs5_ebc0e3ca-4a5f-4ae7-af73-92edccb58a67/extract/0.log" Jan 24 08:58:51 crc kubenswrapper[4705]: I0124 08:58:51.762791 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-r5j5v_652fc521-e0f0-4d0c-8ca3-8077222ab892/manager/0.log" Jan 24 08:58:51 crc kubenswrapper[4705]: I0124 08:58:51.807169 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-nf4zc_0a119afa-9520-46bc-8fde-0b2974035e48/manager/0.log" Jan 24 08:58:51 crc 
kubenswrapper[4705]: I0124 08:58:51.927219 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-dbvkx_91182c35-90b8-409a-ac96-191c754f5c9d/manager/0.log" Jan 24 08:58:52 crc kubenswrapper[4705]: I0124 08:58:52.199917 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-xsz7p_93151962-475c-412e-98d3-7363d8fd5f6c/manager/0.log" Jan 24 08:58:52 crc kubenswrapper[4705]: I0124 08:58:52.416289 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-sj4dw_be549f5c-a477-4e7d-a928-0e9885ffa225/manager/0.log" Jan 24 08:58:52 crc kubenswrapper[4705]: I0124 08:58:52.433323 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-sjc8r_338f4812-65cb-4a3e-a83e-73a72e4f31eb/manager/0.log" Jan 24 08:58:52 crc kubenswrapper[4705]: I0124 08:58:52.695516 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-v629x_241de282-17c7-48c1-b4cb-fbeb9b98bd08/manager/0.log" Jan 24 08:58:52 crc kubenswrapper[4705]: I0124 08:58:52.817837 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-l4fkg_bef91cd6-2f77-474f-8258-e23ca5b37091/manager/0.log" Jan 24 08:58:52 crc kubenswrapper[4705]: I0124 08:58:52.886611 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-tgzdf_23f7495d-06eb-45e5-b5e6-e50169760b0b/manager/0.log" Jan 24 08:58:52 crc kubenswrapper[4705]: I0124 08:58:52.930349 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-s86rp_fd042d2d-7b0f-4ffd-b9fa-8d78a0be9e9e/manager/0.log" Jan 24 
08:58:53 crc kubenswrapper[4705]: I0124 08:58:53.115695 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-x5h78_49e04e6c-68bd-4ebb-a6b8-8c728b7e3ead/manager/0.log" Jan 24 08:58:53 crc kubenswrapper[4705]: I0124 08:58:53.191044 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-mpgjf_bf85561a-7710-4a15-b4b1-c8f48e50dc53/manager/0.log" Jan 24 08:58:53 crc kubenswrapper[4705]: I0124 08:58:53.369551 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-drzkh_b14e84b5-9dcb-4280-9480-a6f34bf8c8dd/manager/0.log" Jan 24 08:58:53 crc kubenswrapper[4705]: I0124 08:58:53.405238 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-869gl_3c52d864-16a1-4eb6-80e9-ac7e5009bbd9/manager/0.log" Jan 24 08:58:53 crc kubenswrapper[4705]: I0124 08:58:53.553826 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854mn7xz_5a7f4747-1fd9-4aa3-b954-e32101ebe927/manager/0.log" Jan 24 08:58:53 crc kubenswrapper[4705]: I0124 08:58:53.701676 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5f778d85fb-56s2d_1c03ba2e-ee1e-4afc-8f97-84439ceec36d/operator/0.log" Jan 24 08:58:53 crc kubenswrapper[4705]: I0124 08:58:53.968222 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fnqzn_508301de-d491-4dbb-9f4b-c2732d5007eb/registry-server/0.log" Jan 24 08:58:54 crc kubenswrapper[4705]: I0124 08:58:54.086428 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-6445j_2e973d30-3868-4922-b576-12587d46810a/manager/0.log" 
Jan 24 08:58:54 crc kubenswrapper[4705]: I0124 08:58:54.196908 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-k8q6j_eb05abb5-cee5-4e0d-9217-6154aebe5836/manager/0.log" Jan 24 08:58:54 crc kubenswrapper[4705]: I0124 08:58:54.473286 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-d4kz9_3fac85c2-ff36-44a9-ae92-947df3332178/operator/0.log" Jan 24 08:58:54 crc kubenswrapper[4705]: I0124 08:58:54.670262 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-f2vcw_404be92b-a12e-42d7-868f-adf825bc7c68/manager/0.log" Jan 24 08:58:54 crc kubenswrapper[4705]: I0124 08:58:54.913746 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-c6x57_a08c7b5c-356a-4a05-a600-82f6bf5aad91/manager/0.log" Jan 24 08:58:54 crc kubenswrapper[4705]: I0124 08:58:54.982949 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7c64596589-v9zxl_f5382856-3a6e-4d10-beb2-9df688e2f6c7/manager/0.log" Jan 24 08:58:55 crc kubenswrapper[4705]: I0124 08:58:55.102966 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-5nngs_0ab0b790-0cf4-453a-9c4b-97a6ddf2bd36/manager/0.log" Jan 24 08:58:55 crc kubenswrapper[4705]: I0124 08:58:55.126937 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8d6967975-rkwgg_51813a09-552b-4f12-904a-840cf6829c80/manager/0.log" Jan 24 08:58:55 crc kubenswrapper[4705]: I0124 08:58:55.576737 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:58:55 crc kubenswrapper[4705]: E0124 
08:58:55.577134 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" Jan 24 08:59:08 crc kubenswrapper[4705]: I0124 08:59:08.575445 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 08:59:08 crc kubenswrapper[4705]: I0124 08:59:08.824621 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"81a244c9bc0e382edb4e7cfeeb01a88091ed158d21d24aa2a5e7678c36d5ec49"} Jan 24 08:59:15 crc kubenswrapper[4705]: I0124 08:59:15.403775 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-7cvlf_a84e98fc-8911-4fe1-8242-e906ccfdb277/control-plane-machine-set-operator/0.log" Jan 24 08:59:15 crc kubenswrapper[4705]: I0124 08:59:15.578399 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hqzgc_1467a368-ffe2-4fd5-abca-e42018890e40/kube-rbac-proxy/0.log" Jan 24 08:59:15 crc kubenswrapper[4705]: I0124 08:59:15.579360 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hqzgc_1467a368-ffe2-4fd5-abca-e42018890e40/machine-api-operator/0.log" Jan 24 08:59:29 crc kubenswrapper[4705]: I0124 08:59:29.390357 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-7d248_b9303a69-3000-46da-a5eb-4c08989db796/cert-manager-controller/0.log" Jan 24 08:59:29 crc 
kubenswrapper[4705]: I0124 08:59:29.585875 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-xm7cv_5fd12a13-2f0a-45ca-99d8-87e45f8f5743/cert-manager-cainjector/0.log" Jan 24 08:59:29 crc kubenswrapper[4705]: I0124 08:59:29.644329 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-dz4zc_0d0523a0-74f2-455b-be13-f2c764d4b4e3/cert-manager-webhook/0.log" Jan 24 08:59:43 crc kubenswrapper[4705]: I0124 08:59:43.215462 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-zc7nf_54c5f862-8725-4f20-9624-c854d2b48634/nmstate-console-plugin/0.log" Jan 24 08:59:43 crc kubenswrapper[4705]: I0124 08:59:43.391922 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-79zvw_8cd45ccc-0402-4c9f-aec3-a5016dd8f8bb/nmstate-handler/0.log" Jan 24 08:59:43 crc kubenswrapper[4705]: I0124 08:59:43.524307 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-pfqmg_f63e787b-b789-4a2f-a0f4-fa433cefe73c/nmstate-metrics/0.log" Jan 24 08:59:43 crc kubenswrapper[4705]: I0124 08:59:43.567776 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-pfqmg_f63e787b-b789-4a2f-a0f4-fa433cefe73c/kube-rbac-proxy/0.log" Jan 24 08:59:43 crc kubenswrapper[4705]: I0124 08:59:43.668538 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-hn9tt_0bf1a582-10a6-4207-a953-16b7751ea5ef/nmstate-operator/0.log" Jan 24 08:59:43 crc kubenswrapper[4705]: I0124 08:59:43.745961 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-h4stc_f9ff4190-6e7e-4e11-8287-1f8c6aa35088/nmstate-webhook/0.log" Jan 24 08:59:59 crc kubenswrapper[4705]: I0124 08:59:59.763769 4705 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-q6rz4_205fc2d4-b488-4221-b24e-c97e1447deb9/prometheus-operator/0.log" Jan 24 08:59:59 crc kubenswrapper[4705]: I0124 08:59:59.988879 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf_b32cfae6-0b9f-4565-b802-c667cc6def0a/prometheus-operator-admission-webhook/0.log" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.036453 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v_3747c1cc-2cec-4baf-b6f2-14109753b841/prometheus-operator-admission-webhook/0.log" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.183977 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz"] Jan 24 09:00:00 crc kubenswrapper[4705]: E0124 09:00:00.184555 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b64b97-cf32-4956-b3a5-80389e6d1ea5" containerName="container-00" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.184578 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b64b97-cf32-4956-b3a5-80389e6d1ea5" containerName="container-00" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.184895 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b64b97-cf32-4956-b3a5-80389e6d1ea5" containerName="container-00" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.187777 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.190855 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.193813 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.200868 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz"] Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.287756 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31572883-55f1-49b4-81fa-65dd7a5c3358-secret-volume\") pod \"collect-profiles-29487420-c68vz\" (UID: \"31572883-55f1-49b4-81fa-65dd7a5c3358\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.288294 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31572883-55f1-49b4-81fa-65dd7a5c3358-config-volume\") pod \"collect-profiles-29487420-c68vz\" (UID: \"31572883-55f1-49b4-81fa-65dd7a5c3358\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.288673 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb99f\" (UniqueName: \"kubernetes.io/projected/31572883-55f1-49b4-81fa-65dd7a5c3358-kube-api-access-pb99f\") pod \"collect-profiles-29487420-c68vz\" (UID: \"31572883-55f1-49b4-81fa-65dd7a5c3358\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.307853 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-hjjgh_348d157c-9094-4f31-aadf-44f7a46f561b/operator/0.log" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.361706 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-l6h9z_3da07060-d23f-4ecc-9a3c-9d659a0ab121/perses-operator/0.log" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.390459 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb99f\" (UniqueName: \"kubernetes.io/projected/31572883-55f1-49b4-81fa-65dd7a5c3358-kube-api-access-pb99f\") pod \"collect-profiles-29487420-c68vz\" (UID: \"31572883-55f1-49b4-81fa-65dd7a5c3358\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.390574 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31572883-55f1-49b4-81fa-65dd7a5c3358-secret-volume\") pod \"collect-profiles-29487420-c68vz\" (UID: \"31572883-55f1-49b4-81fa-65dd7a5c3358\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.390714 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31572883-55f1-49b4-81fa-65dd7a5c3358-config-volume\") pod \"collect-profiles-29487420-c68vz\" (UID: \"31572883-55f1-49b4-81fa-65dd7a5c3358\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.391801 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/31572883-55f1-49b4-81fa-65dd7a5c3358-config-volume\") pod \"collect-profiles-29487420-c68vz\" (UID: \"31572883-55f1-49b4-81fa-65dd7a5c3358\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.400455 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31572883-55f1-49b4-81fa-65dd7a5c3358-secret-volume\") pod \"collect-profiles-29487420-c68vz\" (UID: \"31572883-55f1-49b4-81fa-65dd7a5c3358\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.416226 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb99f\" (UniqueName: \"kubernetes.io/projected/31572883-55f1-49b4-81fa-65dd7a5c3358-kube-api-access-pb99f\") pod \"collect-profiles-29487420-c68vz\" (UID: \"31572883-55f1-49b4-81fa-65dd7a5c3358\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.516634 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz" Jan 24 09:00:00 crc kubenswrapper[4705]: I0124 09:00:00.988689 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz"] Jan 24 09:00:01 crc kubenswrapper[4705]: I0124 09:00:01.998578 4705 generic.go:334] "Generic (PLEG): container finished" podID="31572883-55f1-49b4-81fa-65dd7a5c3358" containerID="6f5008b028ba2639ec87dc0f9ace6fcd71e059215e68f32ddf30257cfff23410" exitCode=0 Jan 24 09:00:01 crc kubenswrapper[4705]: I0124 09:00:01.998700 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz" event={"ID":"31572883-55f1-49b4-81fa-65dd7a5c3358","Type":"ContainerDied","Data":"6f5008b028ba2639ec87dc0f9ace6fcd71e059215e68f32ddf30257cfff23410"} Jan 24 09:00:02 crc kubenswrapper[4705]: I0124 09:00:01.999612 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz" event={"ID":"31572883-55f1-49b4-81fa-65dd7a5c3358","Type":"ContainerStarted","Data":"de463ef0a920d3c2c6c14398b0ad1230bdc02b1ef9d3c66f37d56b093faad8e7"} Jan 24 09:00:03 crc kubenswrapper[4705]: I0124 09:00:03.377079 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz" Jan 24 09:00:03 crc kubenswrapper[4705]: I0124 09:00:03.469436 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31572883-55f1-49b4-81fa-65dd7a5c3358-config-volume\") pod \"31572883-55f1-49b4-81fa-65dd7a5c3358\" (UID: \"31572883-55f1-49b4-81fa-65dd7a5c3358\") " Jan 24 09:00:03 crc kubenswrapper[4705]: I0124 09:00:03.469799 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb99f\" (UniqueName: \"kubernetes.io/projected/31572883-55f1-49b4-81fa-65dd7a5c3358-kube-api-access-pb99f\") pod \"31572883-55f1-49b4-81fa-65dd7a5c3358\" (UID: \"31572883-55f1-49b4-81fa-65dd7a5c3358\") " Jan 24 09:00:03 crc kubenswrapper[4705]: I0124 09:00:03.469886 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31572883-55f1-49b4-81fa-65dd7a5c3358-secret-volume\") pod \"31572883-55f1-49b4-81fa-65dd7a5c3358\" (UID: \"31572883-55f1-49b4-81fa-65dd7a5c3358\") " Jan 24 09:00:03 crc kubenswrapper[4705]: I0124 09:00:03.470249 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31572883-55f1-49b4-81fa-65dd7a5c3358-config-volume" (OuterVolumeSpecName: "config-volume") pod "31572883-55f1-49b4-81fa-65dd7a5c3358" (UID: "31572883-55f1-49b4-81fa-65dd7a5c3358"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 09:00:03 crc kubenswrapper[4705]: I0124 09:00:03.470462 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31572883-55f1-49b4-81fa-65dd7a5c3358-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 09:00:03 crc kubenswrapper[4705]: I0124 09:00:03.475871 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31572883-55f1-49b4-81fa-65dd7a5c3358-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "31572883-55f1-49b4-81fa-65dd7a5c3358" (UID: "31572883-55f1-49b4-81fa-65dd7a5c3358"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 09:00:03 crc kubenswrapper[4705]: I0124 09:00:03.485275 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31572883-55f1-49b4-81fa-65dd7a5c3358-kube-api-access-pb99f" (OuterVolumeSpecName: "kube-api-access-pb99f") pod "31572883-55f1-49b4-81fa-65dd7a5c3358" (UID: "31572883-55f1-49b4-81fa-65dd7a5c3358"). InnerVolumeSpecName "kube-api-access-pb99f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 09:00:03 crc kubenswrapper[4705]: I0124 09:00:03.572725 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb99f\" (UniqueName: \"kubernetes.io/projected/31572883-55f1-49b4-81fa-65dd7a5c3358-kube-api-access-pb99f\") on node \"crc\" DevicePath \"\"" Jan 24 09:00:03 crc kubenswrapper[4705]: I0124 09:00:03.572767 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31572883-55f1-49b4-81fa-65dd7a5c3358-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 09:00:04 crc kubenswrapper[4705]: I0124 09:00:04.018910 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz" event={"ID":"31572883-55f1-49b4-81fa-65dd7a5c3358","Type":"ContainerDied","Data":"de463ef0a920d3c2c6c14398b0ad1230bdc02b1ef9d3c66f37d56b093faad8e7"} Jan 24 09:00:04 crc kubenswrapper[4705]: I0124 09:00:04.018950 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de463ef0a920d3c2c6c14398b0ad1230bdc02b1ef9d3c66f37d56b093faad8e7" Jan 24 09:00:04 crc kubenswrapper[4705]: I0124 09:00:04.018975 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487420-c68vz" Jan 24 09:00:04 crc kubenswrapper[4705]: I0124 09:00:04.468309 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn"] Jan 24 09:00:04 crc kubenswrapper[4705]: I0124 09:00:04.477221 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487375-5h8gn"] Jan 24 09:00:05 crc kubenswrapper[4705]: I0124 09:00:05.588931 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="202e4084-2245-4d25-a122-80c97f3e7824" path="/var/lib/kubelet/pods/202e4084-2245-4d25-a122-80c97f3e7824/volumes" Jan 24 09:00:06 crc kubenswrapper[4705]: E0124 09:00:06.537050 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31572883_55f1_49b4_81fa_65dd7a5c3358.slice/crio-conmon-6f5008b028ba2639ec87dc0f9ace6fcd71e059215e68f32ddf30257cfff23410.scope\": RecentStats: unable to find data in memory cache]" Jan 24 09:00:09 crc kubenswrapper[4705]: I0124 09:00:09.162653 4705 scope.go:117] "RemoveContainer" containerID="bacc8e34e26e38fa180dd1ce2e7dd36e7a679576d7f23960f00d28afc17202bc" Jan 24 09:00:11 crc kubenswrapper[4705]: I0124 09:00:11.945072 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7x6qw"] Jan 24 09:00:11 crc kubenswrapper[4705]: E0124 09:00:11.947275 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31572883-55f1-49b4-81fa-65dd7a5c3358" containerName="collect-profiles" Jan 24 09:00:11 crc kubenswrapper[4705]: I0124 09:00:11.947411 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="31572883-55f1-49b4-81fa-65dd7a5c3358" containerName="collect-profiles" Jan 24 09:00:11 crc kubenswrapper[4705]: I0124 09:00:11.947814 4705 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="31572883-55f1-49b4-81fa-65dd7a5c3358" containerName="collect-profiles" Jan 24 09:00:11 crc kubenswrapper[4705]: I0124 09:00:11.950364 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:11 crc kubenswrapper[4705]: I0124 09:00:11.958075 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7x6qw"] Jan 24 09:00:12 crc kubenswrapper[4705]: I0124 09:00:12.052037 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwtcx\" (UniqueName: \"kubernetes.io/projected/db34af96-5abc-445a-ad9b-d22be4a89898-kube-api-access-lwtcx\") pod \"community-operators-7x6qw\" (UID: \"db34af96-5abc-445a-ad9b-d22be4a89898\") " pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:12 crc kubenswrapper[4705]: I0124 09:00:12.052601 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db34af96-5abc-445a-ad9b-d22be4a89898-catalog-content\") pod \"community-operators-7x6qw\" (UID: \"db34af96-5abc-445a-ad9b-d22be4a89898\") " pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:12 crc kubenswrapper[4705]: I0124 09:00:12.052848 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db34af96-5abc-445a-ad9b-d22be4a89898-utilities\") pod \"community-operators-7x6qw\" (UID: \"db34af96-5abc-445a-ad9b-d22be4a89898\") " pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:12 crc kubenswrapper[4705]: I0124 09:00:12.155379 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db34af96-5abc-445a-ad9b-d22be4a89898-utilities\") pod 
\"community-operators-7x6qw\" (UID: \"db34af96-5abc-445a-ad9b-d22be4a89898\") " pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:12 crc kubenswrapper[4705]: I0124 09:00:12.155507 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwtcx\" (UniqueName: \"kubernetes.io/projected/db34af96-5abc-445a-ad9b-d22be4a89898-kube-api-access-lwtcx\") pod \"community-operators-7x6qw\" (UID: \"db34af96-5abc-445a-ad9b-d22be4a89898\") " pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:12 crc kubenswrapper[4705]: I0124 09:00:12.155725 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db34af96-5abc-445a-ad9b-d22be4a89898-catalog-content\") pod \"community-operators-7x6qw\" (UID: \"db34af96-5abc-445a-ad9b-d22be4a89898\") " pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:12 crc kubenswrapper[4705]: I0124 09:00:12.156336 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db34af96-5abc-445a-ad9b-d22be4a89898-utilities\") pod \"community-operators-7x6qw\" (UID: \"db34af96-5abc-445a-ad9b-d22be4a89898\") " pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:12 crc kubenswrapper[4705]: I0124 09:00:12.156450 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db34af96-5abc-445a-ad9b-d22be4a89898-catalog-content\") pod \"community-operators-7x6qw\" (UID: \"db34af96-5abc-445a-ad9b-d22be4a89898\") " pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:12 crc kubenswrapper[4705]: I0124 09:00:12.191793 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwtcx\" (UniqueName: \"kubernetes.io/projected/db34af96-5abc-445a-ad9b-d22be4a89898-kube-api-access-lwtcx\") pod 
\"community-operators-7x6qw\" (UID: \"db34af96-5abc-445a-ad9b-d22be4a89898\") " pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:12 crc kubenswrapper[4705]: I0124 09:00:12.270046 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:12 crc kubenswrapper[4705]: I0124 09:00:12.753506 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7x6qw"] Jan 24 09:00:13 crc kubenswrapper[4705]: W0124 09:00:13.248422 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb34af96_5abc_445a_ad9b_d22be4a89898.slice/crio-4d4899ab5549d73f098badfa2cb179d3204684483c4d8fc17f75f73c97b8a8b7 WatchSource:0}: Error finding container 4d4899ab5549d73f098badfa2cb179d3204684483c4d8fc17f75f73c97b8a8b7: Status 404 returned error can't find the container with id 4d4899ab5549d73f098badfa2cb179d3204684483c4d8fc17f75f73c97b8a8b7 Jan 24 09:00:14 crc kubenswrapper[4705]: I0124 09:00:14.124776 4705 generic.go:334] "Generic (PLEG): container finished" podID="db34af96-5abc-445a-ad9b-d22be4a89898" containerID="42266d3e0ef6c96f727bf31636f4bdf24f88e74d3c70beb45f0bf1b6eec84fd3" exitCode=0 Jan 24 09:00:14 crc kubenswrapper[4705]: I0124 09:00:14.124999 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7x6qw" event={"ID":"db34af96-5abc-445a-ad9b-d22be4a89898","Type":"ContainerDied","Data":"42266d3e0ef6c96f727bf31636f4bdf24f88e74d3c70beb45f0bf1b6eec84fd3"} Jan 24 09:00:14 crc kubenswrapper[4705]: I0124 09:00:14.125268 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7x6qw" event={"ID":"db34af96-5abc-445a-ad9b-d22be4a89898","Type":"ContainerStarted","Data":"4d4899ab5549d73f098badfa2cb179d3204684483c4d8fc17f75f73c97b8a8b7"} Jan 24 09:00:14 crc kubenswrapper[4705]: I0124 
09:00:14.127109 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 09:00:16 crc kubenswrapper[4705]: I0124 09:00:16.142564 4705 generic.go:334] "Generic (PLEG): container finished" podID="db34af96-5abc-445a-ad9b-d22be4a89898" containerID="004c163db2c22d1177d9f1c6c39d70da284446a2afbfb0a458933022c536d62b" exitCode=0 Jan 24 09:00:16 crc kubenswrapper[4705]: I0124 09:00:16.143006 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7x6qw" event={"ID":"db34af96-5abc-445a-ad9b-d22be4a89898","Type":"ContainerDied","Data":"004c163db2c22d1177d9f1c6c39d70da284446a2afbfb0a458933022c536d62b"} Jan 24 09:00:16 crc kubenswrapper[4705]: I0124 09:00:16.155952 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-zq9jr_24055761-2526-4195-98fd-ba2b83bc9f20/kube-rbac-proxy/0.log" Jan 24 09:00:16 crc kubenswrapper[4705]: I0124 09:00:16.187548 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-zq9jr_24055761-2526-4195-98fd-ba2b83bc9f20/controller/0.log" Jan 24 09:00:16 crc kubenswrapper[4705]: I0124 09:00:16.359513 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-frr-files/0.log" Jan 24 09:00:16 crc kubenswrapper[4705]: I0124 09:00:16.556347 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-reloader/0.log" Jan 24 09:00:16 crc kubenswrapper[4705]: I0124 09:00:16.595372 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-frr-files/0.log" Jan 24 09:00:16 crc kubenswrapper[4705]: I0124 09:00:16.597784 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-reloader/0.log" 
Jan 24 09:00:16 crc kubenswrapper[4705]: I0124 09:00:16.625362 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-metrics/0.log" Jan 24 09:00:16 crc kubenswrapper[4705]: I0124 09:00:16.753036 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-frr-files/0.log" Jan 24 09:00:16 crc kubenswrapper[4705]: I0124 09:00:16.776769 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-reloader/0.log" Jan 24 09:00:16 crc kubenswrapper[4705]: I0124 09:00:16.820154 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-metrics/0.log" Jan 24 09:00:16 crc kubenswrapper[4705]: E0124 09:00:16.836903 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31572883_55f1_49b4_81fa_65dd7a5c3358.slice/crio-conmon-6f5008b028ba2639ec87dc0f9ace6fcd71e059215e68f32ddf30257cfff23410.scope\": RecentStats: unable to find data in memory cache]" Jan 24 09:00:16 crc kubenswrapper[4705]: I0124 09:00:16.895392 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-metrics/0.log" Jan 24 09:00:17 crc kubenswrapper[4705]: I0124 09:00:17.049174 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-frr-files/0.log" Jan 24 09:00:17 crc kubenswrapper[4705]: I0124 09:00:17.113352 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/controller/0.log" Jan 24 09:00:17 crc kubenswrapper[4705]: I0124 09:00:17.116263 4705 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-reloader/0.log" Jan 24 09:00:17 crc kubenswrapper[4705]: I0124 09:00:17.126351 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/cp-metrics/0.log" Jan 24 09:00:17 crc kubenswrapper[4705]: I0124 09:00:17.156835 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7x6qw" event={"ID":"db34af96-5abc-445a-ad9b-d22be4a89898","Type":"ContainerStarted","Data":"5d58fb90401555f5a2a671964175d572c2893aea505a31d00d1c61fffc729f20"} Jan 24 09:00:17 crc kubenswrapper[4705]: I0124 09:00:17.185054 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7x6qw" podStartSLOduration=3.663698879 podStartE2EDuration="6.185012781s" podCreationTimestamp="2026-01-24 09:00:11 +0000 UTC" firstStartedPulling="2026-01-24 09:00:14.126896161 +0000 UTC m=+4752.846769449" lastFinishedPulling="2026-01-24 09:00:16.648210063 +0000 UTC m=+4755.368083351" observedRunningTime="2026-01-24 09:00:17.179806853 +0000 UTC m=+4755.899680141" watchObservedRunningTime="2026-01-24 09:00:17.185012781 +0000 UTC m=+4755.904886069" Jan 24 09:00:17 crc kubenswrapper[4705]: I0124 09:00:17.348069 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/kube-rbac-proxy/0.log" Jan 24 09:00:17 crc kubenswrapper[4705]: I0124 09:00:17.398857 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/frr-metrics/0.log" Jan 24 09:00:17 crc kubenswrapper[4705]: I0124 09:00:17.399438 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/kube-rbac-proxy-frr/0.log" Jan 24 09:00:17 crc kubenswrapper[4705]: I0124 09:00:17.595502 
4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/reloader/0.log" Jan 24 09:00:17 crc kubenswrapper[4705]: I0124 09:00:17.757939 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-47zdb_7ed76590-151f-416d-b485-dd0ec7a67fcc/frr-k8s-webhook-server/0.log" Jan 24 09:00:18 crc kubenswrapper[4705]: I0124 09:00:18.082839 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b969cdf7-4cn5k_6709cabe-fa28-43e6-9999-2da688ab6871/manager/0.log" Jan 24 09:00:18 crc kubenswrapper[4705]: I0124 09:00:18.333301 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-bfbbd9768-tw7cf_67f67aca-78d5-495d-a47d-ce2fdefc502b/webhook-server/0.log" Jan 24 09:00:18 crc kubenswrapper[4705]: I0124 09:00:18.476365 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ff8n7_b43877b9-1325-4a18-abe6-0aea41048802/kube-rbac-proxy/0.log" Jan 24 09:00:19 crc kubenswrapper[4705]: I0124 09:00:19.214395 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ff8n7_b43877b9-1325-4a18-abe6-0aea41048802/speaker/0.log" Jan 24 09:00:19 crc kubenswrapper[4705]: I0124 09:00:19.277634 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-crt2p_d9a20c75-aae2-4b5c-9c15-615590399718/frr/0.log" Jan 24 09:00:22 crc kubenswrapper[4705]: I0124 09:00:22.270583 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:22 crc kubenswrapper[4705]: I0124 09:00:22.271203 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:22 crc kubenswrapper[4705]: I0124 09:00:22.983944 4705 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:23 crc kubenswrapper[4705]: I0124 09:00:23.357053 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:23 crc kubenswrapper[4705]: I0124 09:00:23.445740 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7x6qw"] Jan 24 09:00:25 crc kubenswrapper[4705]: I0124 09:00:25.312238 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7x6qw" podUID="db34af96-5abc-445a-ad9b-d22be4a89898" containerName="registry-server" containerID="cri-o://5d58fb90401555f5a2a671964175d572c2893aea505a31d00d1c61fffc729f20" gracePeriod=2 Jan 24 09:00:25 crc kubenswrapper[4705]: I0124 09:00:25.911781 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.086879 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db34af96-5abc-445a-ad9b-d22be4a89898-catalog-content\") pod \"db34af96-5abc-445a-ad9b-d22be4a89898\" (UID: \"db34af96-5abc-445a-ad9b-d22be4a89898\") " Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.087145 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwtcx\" (UniqueName: \"kubernetes.io/projected/db34af96-5abc-445a-ad9b-d22be4a89898-kube-api-access-lwtcx\") pod \"db34af96-5abc-445a-ad9b-d22be4a89898\" (UID: \"db34af96-5abc-445a-ad9b-d22be4a89898\") " Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.087220 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/db34af96-5abc-445a-ad9b-d22be4a89898-utilities\") pod \"db34af96-5abc-445a-ad9b-d22be4a89898\" (UID: \"db34af96-5abc-445a-ad9b-d22be4a89898\") " Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.088034 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db34af96-5abc-445a-ad9b-d22be4a89898-utilities" (OuterVolumeSpecName: "utilities") pod "db34af96-5abc-445a-ad9b-d22be4a89898" (UID: "db34af96-5abc-445a-ad9b-d22be4a89898"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.096251 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db34af96-5abc-445a-ad9b-d22be4a89898-kube-api-access-lwtcx" (OuterVolumeSpecName: "kube-api-access-lwtcx") pod "db34af96-5abc-445a-ad9b-d22be4a89898" (UID: "db34af96-5abc-445a-ad9b-d22be4a89898"). InnerVolumeSpecName "kube-api-access-lwtcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.138268 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db34af96-5abc-445a-ad9b-d22be4a89898-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db34af96-5abc-445a-ad9b-d22be4a89898" (UID: "db34af96-5abc-445a-ad9b-d22be4a89898"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.189054 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db34af96-5abc-445a-ad9b-d22be4a89898-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.189084 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwtcx\" (UniqueName: \"kubernetes.io/projected/db34af96-5abc-445a-ad9b-d22be4a89898-kube-api-access-lwtcx\") on node \"crc\" DevicePath \"\"" Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.189095 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db34af96-5abc-445a-ad9b-d22be4a89898-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.324254 4705 generic.go:334] "Generic (PLEG): container finished" podID="db34af96-5abc-445a-ad9b-d22be4a89898" containerID="5d58fb90401555f5a2a671964175d572c2893aea505a31d00d1c61fffc729f20" exitCode=0 Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.324300 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7x6qw" event={"ID":"db34af96-5abc-445a-ad9b-d22be4a89898","Type":"ContainerDied","Data":"5d58fb90401555f5a2a671964175d572c2893aea505a31d00d1c61fffc729f20"} Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.324336 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7x6qw" event={"ID":"db34af96-5abc-445a-ad9b-d22be4a89898","Type":"ContainerDied","Data":"4d4899ab5549d73f098badfa2cb179d3204684483c4d8fc17f75f73c97b8a8b7"} Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.324359 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7x6qw" Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.324383 4705 scope.go:117] "RemoveContainer" containerID="5d58fb90401555f5a2a671964175d572c2893aea505a31d00d1c61fffc729f20" Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.344531 4705 scope.go:117] "RemoveContainer" containerID="004c163db2c22d1177d9f1c6c39d70da284446a2afbfb0a458933022c536d62b" Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.364948 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7x6qw"] Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.380809 4705 scope.go:117] "RemoveContainer" containerID="42266d3e0ef6c96f727bf31636f4bdf24f88e74d3c70beb45f0bf1b6eec84fd3" Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.386954 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7x6qw"] Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.426378 4705 scope.go:117] "RemoveContainer" containerID="5d58fb90401555f5a2a671964175d572c2893aea505a31d00d1c61fffc729f20" Jan 24 09:00:26 crc kubenswrapper[4705]: E0124 09:00:26.426866 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d58fb90401555f5a2a671964175d572c2893aea505a31d00d1c61fffc729f20\": container with ID starting with 5d58fb90401555f5a2a671964175d572c2893aea505a31d00d1c61fffc729f20 not found: ID does not exist" containerID="5d58fb90401555f5a2a671964175d572c2893aea505a31d00d1c61fffc729f20" Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.426919 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d58fb90401555f5a2a671964175d572c2893aea505a31d00d1c61fffc729f20"} err="failed to get container status \"5d58fb90401555f5a2a671964175d572c2893aea505a31d00d1c61fffc729f20\": rpc error: code = NotFound desc = could not find 
container \"5d58fb90401555f5a2a671964175d572c2893aea505a31d00d1c61fffc729f20\": container with ID starting with 5d58fb90401555f5a2a671964175d572c2893aea505a31d00d1c61fffc729f20 not found: ID does not exist" Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.426946 4705 scope.go:117] "RemoveContainer" containerID="004c163db2c22d1177d9f1c6c39d70da284446a2afbfb0a458933022c536d62b" Jan 24 09:00:26 crc kubenswrapper[4705]: E0124 09:00:26.427380 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"004c163db2c22d1177d9f1c6c39d70da284446a2afbfb0a458933022c536d62b\": container with ID starting with 004c163db2c22d1177d9f1c6c39d70da284446a2afbfb0a458933022c536d62b not found: ID does not exist" containerID="004c163db2c22d1177d9f1c6c39d70da284446a2afbfb0a458933022c536d62b" Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.427420 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"004c163db2c22d1177d9f1c6c39d70da284446a2afbfb0a458933022c536d62b"} err="failed to get container status \"004c163db2c22d1177d9f1c6c39d70da284446a2afbfb0a458933022c536d62b\": rpc error: code = NotFound desc = could not find container \"004c163db2c22d1177d9f1c6c39d70da284446a2afbfb0a458933022c536d62b\": container with ID starting with 004c163db2c22d1177d9f1c6c39d70da284446a2afbfb0a458933022c536d62b not found: ID does not exist" Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.427448 4705 scope.go:117] "RemoveContainer" containerID="42266d3e0ef6c96f727bf31636f4bdf24f88e74d3c70beb45f0bf1b6eec84fd3" Jan 24 09:00:26 crc kubenswrapper[4705]: E0124 09:00:26.427742 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42266d3e0ef6c96f727bf31636f4bdf24f88e74d3c70beb45f0bf1b6eec84fd3\": container with ID starting with 42266d3e0ef6c96f727bf31636f4bdf24f88e74d3c70beb45f0bf1b6eec84fd3 not found: ID does 
not exist" containerID="42266d3e0ef6c96f727bf31636f4bdf24f88e74d3c70beb45f0bf1b6eec84fd3" Jan 24 09:00:26 crc kubenswrapper[4705]: I0124 09:00:26.427768 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42266d3e0ef6c96f727bf31636f4bdf24f88e74d3c70beb45f0bf1b6eec84fd3"} err="failed to get container status \"42266d3e0ef6c96f727bf31636f4bdf24f88e74d3c70beb45f0bf1b6eec84fd3\": rpc error: code = NotFound desc = could not find container \"42266d3e0ef6c96f727bf31636f4bdf24f88e74d3c70beb45f0bf1b6eec84fd3\": container with ID starting with 42266d3e0ef6c96f727bf31636f4bdf24f88e74d3c70beb45f0bf1b6eec84fd3 not found: ID does not exist" Jan 24 09:00:27 crc kubenswrapper[4705]: E0124 09:00:27.092336 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31572883_55f1_49b4_81fa_65dd7a5c3358.slice/crio-conmon-6f5008b028ba2639ec87dc0f9ace6fcd71e059215e68f32ddf30257cfff23410.scope\": RecentStats: unable to find data in memory cache]" Jan 24 09:00:27 crc kubenswrapper[4705]: I0124 09:00:27.610045 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db34af96-5abc-445a-ad9b-d22be4a89898" path="/var/lib/kubelet/pods/db34af96-5abc-445a-ad9b-d22be4a89898/volumes" Jan 24 09:00:32 crc kubenswrapper[4705]: I0124 09:00:32.802547 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp_c6531f50-1387-4cfa-946d-ef139131e7d0/util/0.log" Jan 24 09:00:32 crc kubenswrapper[4705]: I0124 09:00:32.987771 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp_c6531f50-1387-4cfa-946d-ef139131e7d0/pull/0.log" Jan 24 09:00:32 crc kubenswrapper[4705]: I0124 09:00:32.994048 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp_c6531f50-1387-4cfa-946d-ef139131e7d0/util/0.log" Jan 24 09:00:33 crc kubenswrapper[4705]: I0124 09:00:33.159802 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp_c6531f50-1387-4cfa-946d-ef139131e7d0/pull/0.log" Jan 24 09:00:33 crc kubenswrapper[4705]: I0124 09:00:33.292297 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp_c6531f50-1387-4cfa-946d-ef139131e7d0/util/0.log" Jan 24 09:00:33 crc kubenswrapper[4705]: I0124 09:00:33.293205 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp_c6531f50-1387-4cfa-946d-ef139131e7d0/pull/0.log" Jan 24 09:00:33 crc kubenswrapper[4705]: I0124 09:00:33.329194 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcggfpp_c6531f50-1387-4cfa-946d-ef139131e7d0/extract/0.log" Jan 24 09:00:34 crc kubenswrapper[4705]: I0124 09:00:34.226461 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f_c803c260-b4e8-4052-9f31-174e7abed57d/util/0.log" Jan 24 09:00:34 crc kubenswrapper[4705]: I0124 09:00:34.368262 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f_c803c260-b4e8-4052-9f31-174e7abed57d/util/0.log" Jan 24 09:00:34 crc kubenswrapper[4705]: I0124 09:00:34.386372 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f_c803c260-b4e8-4052-9f31-174e7abed57d/pull/0.log" Jan 24 
09:00:34 crc kubenswrapper[4705]: I0124 09:00:34.418121 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f_c803c260-b4e8-4052-9f31-174e7abed57d/pull/0.log" Jan 24 09:00:34 crc kubenswrapper[4705]: I0124 09:00:34.570077 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f_c803c260-b4e8-4052-9f31-174e7abed57d/util/0.log" Jan 24 09:00:34 crc kubenswrapper[4705]: I0124 09:00:34.622350 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f_c803c260-b4e8-4052-9f31-174e7abed57d/extract/0.log" Jan 24 09:00:34 crc kubenswrapper[4705]: I0124 09:00:34.622938 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tx72f_c803c260-b4e8-4052-9f31-174e7abed57d/pull/0.log" Jan 24 09:00:34 crc kubenswrapper[4705]: I0124 09:00:34.769446 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz_8d7c435c-74d0-498b-97ba-bdc522a1b144/util/0.log" Jan 24 09:00:34 crc kubenswrapper[4705]: I0124 09:00:34.972129 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz_8d7c435c-74d0-498b-97ba-bdc522a1b144/pull/0.log" Jan 24 09:00:34 crc kubenswrapper[4705]: I0124 09:00:34.974758 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz_8d7c435c-74d0-498b-97ba-bdc522a1b144/pull/0.log" Jan 24 09:00:34 crc kubenswrapper[4705]: I0124 09:00:34.992491 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz_8d7c435c-74d0-498b-97ba-bdc522a1b144/util/0.log" Jan 24 09:00:35 crc kubenswrapper[4705]: I0124 09:00:35.291032 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz_8d7c435c-74d0-498b-97ba-bdc522a1b144/util/0.log" Jan 24 09:00:35 crc kubenswrapper[4705]: I0124 09:00:35.291047 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz_8d7c435c-74d0-498b-97ba-bdc522a1b144/pull/0.log" Jan 24 09:00:35 crc kubenswrapper[4705]: I0124 09:00:35.344489 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087p6sz_8d7c435c-74d0-498b-97ba-bdc522a1b144/extract/0.log" Jan 24 09:00:35 crc kubenswrapper[4705]: I0124 09:00:35.485336 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rmh7t_2fbcc45f-578c-43e4-9351-4b30d72d28f9/extract-utilities/0.log" Jan 24 09:00:36 crc kubenswrapper[4705]: I0124 09:00:36.118685 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rmh7t_2fbcc45f-578c-43e4-9351-4b30d72d28f9/extract-utilities/0.log" Jan 24 09:00:36 crc kubenswrapper[4705]: I0124 09:00:36.154052 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rmh7t_2fbcc45f-578c-43e4-9351-4b30d72d28f9/extract-content/0.log" Jan 24 09:00:36 crc kubenswrapper[4705]: I0124 09:00:36.155148 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rmh7t_2fbcc45f-578c-43e4-9351-4b30d72d28f9/extract-content/0.log" Jan 24 09:00:36 crc kubenswrapper[4705]: I0124 09:00:36.306108 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-rmh7t_2fbcc45f-578c-43e4-9351-4b30d72d28f9/extract-utilities/0.log" Jan 24 09:00:36 crc kubenswrapper[4705]: I0124 09:00:36.342586 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rmh7t_2fbcc45f-578c-43e4-9351-4b30d72d28f9/extract-content/0.log" Jan 24 09:00:36 crc kubenswrapper[4705]: I0124 09:00:36.702646 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-srn4g_7bd1c960-7a5d-48dd-bef7-2fd94acef48d/extract-utilities/0.log" Jan 24 09:00:36 crc kubenswrapper[4705]: I0124 09:00:36.857069 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-srn4g_7bd1c960-7a5d-48dd-bef7-2fd94acef48d/extract-utilities/0.log" Jan 24 09:00:36 crc kubenswrapper[4705]: I0124 09:00:36.864784 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-srn4g_7bd1c960-7a5d-48dd-bef7-2fd94acef48d/extract-content/0.log" Jan 24 09:00:36 crc kubenswrapper[4705]: I0124 09:00:36.900724 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-srn4g_7bd1c960-7a5d-48dd-bef7-2fd94acef48d/extract-content/0.log" Jan 24 09:00:36 crc kubenswrapper[4705]: I0124 09:00:36.958849 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rmh7t_2fbcc45f-578c-43e4-9351-4b30d72d28f9/registry-server/0.log" Jan 24 09:00:37 crc kubenswrapper[4705]: I0124 09:00:37.071950 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-srn4g_7bd1c960-7a5d-48dd-bef7-2fd94acef48d/extract-content/0.log" Jan 24 09:00:37 crc kubenswrapper[4705]: I0124 09:00:37.109983 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-srn4g_7bd1c960-7a5d-48dd-bef7-2fd94acef48d/extract-utilities/0.log" Jan 24 09:00:37 crc kubenswrapper[4705]: I0124 09:00:37.189742 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-httl9_568a6099-4783-45d9-9ea8-7c856a3ddd86/marketplace-operator/0.log" Jan 24 09:00:37 crc kubenswrapper[4705]: E0124 09:00:37.366299 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31572883_55f1_49b4_81fa_65dd7a5c3358.slice/crio-conmon-6f5008b028ba2639ec87dc0f9ace6fcd71e059215e68f32ddf30257cfff23410.scope\": RecentStats: unable to find data in memory cache]" Jan 24 09:00:37 crc kubenswrapper[4705]: I0124 09:00:37.406700 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4ffm5_d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea/extract-utilities/0.log" Jan 24 09:00:37 crc kubenswrapper[4705]: I0124 09:00:37.540156 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4ffm5_d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea/extract-content/0.log" Jan 24 09:00:37 crc kubenswrapper[4705]: I0124 09:00:37.541486 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4ffm5_d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea/extract-utilities/0.log" Jan 24 09:00:37 crc kubenswrapper[4705]: I0124 09:00:37.544809 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-srn4g_7bd1c960-7a5d-48dd-bef7-2fd94acef48d/registry-server/0.log" Jan 24 09:00:37 crc kubenswrapper[4705]: I0124 09:00:37.610560 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4ffm5_d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea/extract-content/0.log" Jan 24 09:00:37 crc 
kubenswrapper[4705]: I0124 09:00:37.690430 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4ffm5_d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea/extract-utilities/0.log" Jan 24 09:00:37 crc kubenswrapper[4705]: I0124 09:00:37.713557 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4ffm5_d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea/extract-content/0.log" Jan 24 09:00:37 crc kubenswrapper[4705]: I0124 09:00:37.801857 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4rlgc_2681beb5-9536-4c9d-9221-18da1c8e244b/extract-utilities/0.log" Jan 24 09:00:37 crc kubenswrapper[4705]: I0124 09:00:37.895157 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4ffm5_d1c9b3f7-c198-42c1-80b8-0b8245e1f3ea/registry-server/0.log" Jan 24 09:00:38 crc kubenswrapper[4705]: I0124 09:00:38.753176 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4rlgc_2681beb5-9536-4c9d-9221-18da1c8e244b/extract-content/0.log" Jan 24 09:00:38 crc kubenswrapper[4705]: I0124 09:00:38.753356 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4rlgc_2681beb5-9536-4c9d-9221-18da1c8e244b/extract-utilities/0.log" Jan 24 09:00:38 crc kubenswrapper[4705]: I0124 09:00:38.753472 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4rlgc_2681beb5-9536-4c9d-9221-18da1c8e244b/extract-content/0.log" Jan 24 09:00:39 crc kubenswrapper[4705]: I0124 09:00:39.031141 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4rlgc_2681beb5-9536-4c9d-9221-18da1c8e244b/extract-utilities/0.log" Jan 24 09:00:39 crc kubenswrapper[4705]: I0124 09:00:39.060049 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-4rlgc_2681beb5-9536-4c9d-9221-18da1c8e244b/extract-content/0.log" Jan 24 09:00:39 crc kubenswrapper[4705]: I0124 09:00:39.707991 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4rlgc_2681beb5-9536-4c9d-9221-18da1c8e244b/registry-server/0.log" Jan 24 09:00:47 crc kubenswrapper[4705]: E0124 09:00:47.642116 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31572883_55f1_49b4_81fa_65dd7a5c3358.slice/crio-conmon-6f5008b028ba2639ec87dc0f9ace6fcd71e059215e68f32ddf30257cfff23410.scope\": RecentStats: unable to find data in memory cache]" Jan 24 09:00:54 crc kubenswrapper[4705]: I0124 09:00:54.171130 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-q6rz4_205fc2d4-b488-4221-b24e-c97e1447deb9/prometheus-operator/0.log" Jan 24 09:00:54 crc kubenswrapper[4705]: I0124 09:00:54.175723 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b98fccd64-t2lqf_b32cfae6-0b9f-4565-b802-c667cc6def0a/prometheus-operator-admission-webhook/0.log" Jan 24 09:00:54 crc kubenswrapper[4705]: I0124 09:00:54.177925 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b98fccd64-v2p7v_3747c1cc-2cec-4baf-b6f2-14109753b841/prometheus-operator-admission-webhook/0.log" Jan 24 09:00:54 crc kubenswrapper[4705]: I0124 09:00:54.332535 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-l6h9z_3da07060-d23f-4ecc-9a3c-9d659a0ab121/perses-operator/0.log" Jan 24 09:00:54 crc kubenswrapper[4705]: I0124 09:00:54.359690 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-hjjgh_348d157c-9094-4f31-aadf-44f7a46f561b/operator/0.log" Jan 24 09:00:57 crc kubenswrapper[4705]: E0124 09:00:57.907757 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31572883_55f1_49b4_81fa_65dd7a5c3358.slice/crio-conmon-6f5008b028ba2639ec87dc0f9ace6fcd71e059215e68f32ddf30257cfff23410.scope\": RecentStats: unable to find data in memory cache]" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.156288 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29487421-nb7bz"] Jan 24 09:01:00 crc kubenswrapper[4705]: E0124 09:01:00.157171 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db34af96-5abc-445a-ad9b-d22be4a89898" containerName="extract-utilities" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.157190 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="db34af96-5abc-445a-ad9b-d22be4a89898" containerName="extract-utilities" Jan 24 09:01:00 crc kubenswrapper[4705]: E0124 09:01:00.157216 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db34af96-5abc-445a-ad9b-d22be4a89898" containerName="extract-content" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.157224 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="db34af96-5abc-445a-ad9b-d22be4a89898" containerName="extract-content" Jan 24 09:01:00 crc kubenswrapper[4705]: E0124 09:01:00.157244 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db34af96-5abc-445a-ad9b-d22be4a89898" containerName="registry-server" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.157252 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="db34af96-5abc-445a-ad9b-d22be4a89898" containerName="registry-server" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.159663 4705 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="db34af96-5abc-445a-ad9b-d22be4a89898" containerName="registry-server" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.160761 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29487421-nb7bz" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.171982 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29487421-nb7bz"] Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.219464 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z64r7\" (UniqueName: \"kubernetes.io/projected/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-kube-api-access-z64r7\") pod \"keystone-cron-29487421-nb7bz\" (UID: \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\") " pod="openstack/keystone-cron-29487421-nb7bz" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.219602 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-fernet-keys\") pod \"keystone-cron-29487421-nb7bz\" (UID: \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\") " pod="openstack/keystone-cron-29487421-nb7bz" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.219646 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-combined-ca-bundle\") pod \"keystone-cron-29487421-nb7bz\" (UID: \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\") " pod="openstack/keystone-cron-29487421-nb7bz" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.219681 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-config-data\") pod \"keystone-cron-29487421-nb7bz\" 
(UID: \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\") " pod="openstack/keystone-cron-29487421-nb7bz" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.321787 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z64r7\" (UniqueName: \"kubernetes.io/projected/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-kube-api-access-z64r7\") pod \"keystone-cron-29487421-nb7bz\" (UID: \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\") " pod="openstack/keystone-cron-29487421-nb7bz" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.322336 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-fernet-keys\") pod \"keystone-cron-29487421-nb7bz\" (UID: \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\") " pod="openstack/keystone-cron-29487421-nb7bz" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.322387 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-combined-ca-bundle\") pod \"keystone-cron-29487421-nb7bz\" (UID: \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\") " pod="openstack/keystone-cron-29487421-nb7bz" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.322423 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-config-data\") pod \"keystone-cron-29487421-nb7bz\" (UID: \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\") " pod="openstack/keystone-cron-29487421-nb7bz" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.328927 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-fernet-keys\") pod \"keystone-cron-29487421-nb7bz\" (UID: \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\") " 
pod="openstack/keystone-cron-29487421-nb7bz" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.332888 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-config-data\") pod \"keystone-cron-29487421-nb7bz\" (UID: \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\") " pod="openstack/keystone-cron-29487421-nb7bz" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.349116 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-combined-ca-bundle\") pod \"keystone-cron-29487421-nb7bz\" (UID: \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\") " pod="openstack/keystone-cron-29487421-nb7bz" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.397448 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z64r7\" (UniqueName: \"kubernetes.io/projected/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-kube-api-access-z64r7\") pod \"keystone-cron-29487421-nb7bz\" (UID: \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\") " pod="openstack/keystone-cron-29487421-nb7bz" Jan 24 09:01:00 crc kubenswrapper[4705]: I0124 09:01:00.482630 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29487421-nb7bz" Jan 24 09:01:01 crc kubenswrapper[4705]: I0124 09:01:01.007140 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29487421-nb7bz"] Jan 24 09:01:01 crc kubenswrapper[4705]: I0124 09:01:01.995170 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29487421-nb7bz" event={"ID":"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b","Type":"ContainerStarted","Data":"970d1ce4adb139a64bfec07275984ef53a306199a91540e69aac66de3c93263c"} Jan 24 09:01:01 crc kubenswrapper[4705]: I0124 09:01:01.996728 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29487421-nb7bz" event={"ID":"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b","Type":"ContainerStarted","Data":"f5be7ba8771076fa009bb742f533d45bf7cb99e93a7a29c26a6a3e8cd4125d98"} Jan 24 09:01:02 crc kubenswrapper[4705]: I0124 09:01:02.015929 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29487421-nb7bz" podStartSLOduration=2.015904072 podStartE2EDuration="2.015904072s" podCreationTimestamp="2026-01-24 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 09:01:02.010789527 +0000 UTC m=+4800.730662815" watchObservedRunningTime="2026-01-24 09:01:02.015904072 +0000 UTC m=+4800.735777360" Jan 24 09:01:05 crc kubenswrapper[4705]: I0124 09:01:05.025595 4705 generic.go:334] "Generic (PLEG): container finished" podID="a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b" containerID="970d1ce4adb139a64bfec07275984ef53a306199a91540e69aac66de3c93263c" exitCode=0 Jan 24 09:01:05 crc kubenswrapper[4705]: I0124 09:01:05.026025 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29487421-nb7bz" 
event={"ID":"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b","Type":"ContainerDied","Data":"970d1ce4adb139a64bfec07275984ef53a306199a91540e69aac66de3c93263c"} Jan 24 09:01:06 crc kubenswrapper[4705]: I0124 09:01:06.541110 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29487421-nb7bz" Jan 24 09:01:06 crc kubenswrapper[4705]: I0124 09:01:06.723266 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z64r7\" (UniqueName: \"kubernetes.io/projected/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-kube-api-access-z64r7\") pod \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\" (UID: \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\") " Jan 24 09:01:06 crc kubenswrapper[4705]: I0124 09:01:06.723338 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-combined-ca-bundle\") pod \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\" (UID: \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\") " Jan 24 09:01:06 crc kubenswrapper[4705]: I0124 09:01:06.723479 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-fernet-keys\") pod \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\" (UID: \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\") " Jan 24 09:01:06 crc kubenswrapper[4705]: I0124 09:01:06.723535 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-config-data\") pod \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\" (UID: \"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b\") " Jan 24 09:01:06 crc kubenswrapper[4705]: I0124 09:01:06.729988 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b" (UID: "a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 09:01:06 crc kubenswrapper[4705]: I0124 09:01:06.730030 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-kube-api-access-z64r7" (OuterVolumeSpecName: "kube-api-access-z64r7") pod "a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b" (UID: "a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b"). InnerVolumeSpecName "kube-api-access-z64r7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 09:01:06 crc kubenswrapper[4705]: I0124 09:01:06.825996 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z64r7\" (UniqueName: \"kubernetes.io/projected/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-kube-api-access-z64r7\") on node \"crc\" DevicePath \"\"" Jan 24 09:01:06 crc kubenswrapper[4705]: I0124 09:01:06.826025 4705 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 24 09:01:06 crc kubenswrapper[4705]: I0124 09:01:06.945536 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b" (UID: "a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 09:01:06 crc kubenswrapper[4705]: I0124 09:01:06.946038 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-config-data" (OuterVolumeSpecName: "config-data") pod "a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b" (UID: "a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 09:01:07 crc kubenswrapper[4705]: I0124 09:01:07.033998 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 09:01:07 crc kubenswrapper[4705]: I0124 09:01:07.034032 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 09:01:07 crc kubenswrapper[4705]: I0124 09:01:07.070139 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29487421-nb7bz" event={"ID":"a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b","Type":"ContainerDied","Data":"f5be7ba8771076fa009bb742f533d45bf7cb99e93a7a29c26a6a3e8cd4125d98"} Jan 24 09:01:07 crc kubenswrapper[4705]: I0124 09:01:07.070187 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5be7ba8771076fa009bb742f533d45bf7cb99e93a7a29c26a6a3e8cd4125d98" Jan 24 09:01:07 crc kubenswrapper[4705]: I0124 09:01:07.070234 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29487421-nb7bz" Jan 24 09:01:37 crc kubenswrapper[4705]: I0124 09:01:37.070961 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 09:01:37 crc kubenswrapper[4705]: I0124 09:01:37.071573 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 09:02:07 crc kubenswrapper[4705]: I0124 09:02:07.071433 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 09:02:07 crc kubenswrapper[4705]: I0124 09:02:07.072010 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 09:02:37 crc kubenswrapper[4705]: I0124 09:02:37.071130 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 09:02:37 crc kubenswrapper[4705]: I0124 09:02:37.071899 4705 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 09:02:37 crc kubenswrapper[4705]: I0124 09:02:37.072030 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" Jan 24 09:02:37 crc kubenswrapper[4705]: I0124 09:02:37.073622 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81a244c9bc0e382edb4e7cfeeb01a88091ed158d21d24aa2a5e7678c36d5ec49"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 09:02:37 crc kubenswrapper[4705]: I0124 09:02:37.073798 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://81a244c9bc0e382edb4e7cfeeb01a88091ed158d21d24aa2a5e7678c36d5ec49" gracePeriod=600 Jan 24 09:02:37 crc kubenswrapper[4705]: I0124 09:02:37.210004 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerID="81a244c9bc0e382edb4e7cfeeb01a88091ed158d21d24aa2a5e7678c36d5ec49" exitCode=0 Jan 24 09:02:37 crc kubenswrapper[4705]: I0124 09:02:37.210097 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"81a244c9bc0e382edb4e7cfeeb01a88091ed158d21d24aa2a5e7678c36d5ec49"} Jan 24 09:02:37 crc kubenswrapper[4705]: I0124 
09:02:37.210411 4705 scope.go:117] "RemoveContainer" containerID="d77f33b4601fcc9caf7f6aac181835b451cd8e1d57684fecb179e62aeb71a21c" Jan 24 09:02:37 crc kubenswrapper[4705]: I0124 09:02:37.213009 4705 generic.go:334] "Generic (PLEG): container finished" podID="9c59f136-51de-4fe6-95c6-f00cf94c1e02" containerID="4285f50d74a41786ccbfb08214040e3ebe412f7889aa64669ea127fcac882b8e" exitCode=0 Jan 24 09:02:37 crc kubenswrapper[4705]: I0124 09:02:37.213079 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9rcfs/must-gather-nwqc4" event={"ID":"9c59f136-51de-4fe6-95c6-f00cf94c1e02","Type":"ContainerDied","Data":"4285f50d74a41786ccbfb08214040e3ebe412f7889aa64669ea127fcac882b8e"} Jan 24 09:02:37 crc kubenswrapper[4705]: I0124 09:02:37.213957 4705 scope.go:117] "RemoveContainer" containerID="4285f50d74a41786ccbfb08214040e3ebe412f7889aa64669ea127fcac882b8e" Jan 24 09:02:37 crc kubenswrapper[4705]: I0124 09:02:37.420212 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9rcfs_must-gather-nwqc4_9c59f136-51de-4fe6-95c6-f00cf94c1e02/gather/0.log" Jan 24 09:02:38 crc kubenswrapper[4705]: I0124 09:02:38.225036 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerStarted","Data":"731808473f8b10ab9daac20a4d992c20024626acd8f3e0e9bde4c3e06b064de3"} Jan 24 09:02:50 crc kubenswrapper[4705]: I0124 09:02:50.477030 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9rcfs/must-gather-nwqc4"] Jan 24 09:02:50 crc kubenswrapper[4705]: I0124 09:02:50.477803 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9rcfs/must-gather-nwqc4" podUID="9c59f136-51de-4fe6-95c6-f00cf94c1e02" containerName="copy" containerID="cri-o://c2d18c8871a9c2d9a10329d1438982a6327084e466311e4f5a362e5b5d6b4c11" gracePeriod=2 Jan 24 09:02:50 crc 
kubenswrapper[4705]: I0124 09:02:50.494364 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9rcfs/must-gather-nwqc4"] Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.081172 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9rcfs_must-gather-nwqc4_9c59f136-51de-4fe6-95c6-f00cf94c1e02/copy/0.log" Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.081713 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9rcfs/must-gather-nwqc4" Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.224360 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9c59f136-51de-4fe6-95c6-f00cf94c1e02-must-gather-output\") pod \"9c59f136-51de-4fe6-95c6-f00cf94c1e02\" (UID: \"9c59f136-51de-4fe6-95c6-f00cf94c1e02\") " Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.224504 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6sd9\" (UniqueName: \"kubernetes.io/projected/9c59f136-51de-4fe6-95c6-f00cf94c1e02-kube-api-access-n6sd9\") pod \"9c59f136-51de-4fe6-95c6-f00cf94c1e02\" (UID: \"9c59f136-51de-4fe6-95c6-f00cf94c1e02\") " Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.230329 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c59f136-51de-4fe6-95c6-f00cf94c1e02-kube-api-access-n6sd9" (OuterVolumeSpecName: "kube-api-access-n6sd9") pod "9c59f136-51de-4fe6-95c6-f00cf94c1e02" (UID: "9c59f136-51de-4fe6-95c6-f00cf94c1e02"). InnerVolumeSpecName "kube-api-access-n6sd9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.327619 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6sd9\" (UniqueName: \"kubernetes.io/projected/9c59f136-51de-4fe6-95c6-f00cf94c1e02-kube-api-access-n6sd9\") on node \"crc\" DevicePath \"\"" Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.393860 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9rcfs_must-gather-nwqc4_9c59f136-51de-4fe6-95c6-f00cf94c1e02/copy/0.log" Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.401120 4705 generic.go:334] "Generic (PLEG): container finished" podID="9c59f136-51de-4fe6-95c6-f00cf94c1e02" containerID="c2d18c8871a9c2d9a10329d1438982a6327084e466311e4f5a362e5b5d6b4c11" exitCode=143 Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.401197 4705 scope.go:117] "RemoveContainer" containerID="c2d18c8871a9c2d9a10329d1438982a6327084e466311e4f5a362e5b5d6b4c11" Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.401385 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9rcfs/must-gather-nwqc4" Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.419196 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c59f136-51de-4fe6-95c6-f00cf94c1e02-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9c59f136-51de-4fe6-95c6-f00cf94c1e02" (UID: "9c59f136-51de-4fe6-95c6-f00cf94c1e02"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.429875 4705 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9c59f136-51de-4fe6-95c6-f00cf94c1e02-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.478916 4705 scope.go:117] "RemoveContainer" containerID="4285f50d74a41786ccbfb08214040e3ebe412f7889aa64669ea127fcac882b8e" Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.593064 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c59f136-51de-4fe6-95c6-f00cf94c1e02" path="/var/lib/kubelet/pods/9c59f136-51de-4fe6-95c6-f00cf94c1e02/volumes" Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.618039 4705 scope.go:117] "RemoveContainer" containerID="c2d18c8871a9c2d9a10329d1438982a6327084e466311e4f5a362e5b5d6b4c11" Jan 24 09:02:51 crc kubenswrapper[4705]: E0124 09:02:51.618567 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2d18c8871a9c2d9a10329d1438982a6327084e466311e4f5a362e5b5d6b4c11\": container with ID starting with c2d18c8871a9c2d9a10329d1438982a6327084e466311e4f5a362e5b5d6b4c11 not found: ID does not exist" containerID="c2d18c8871a9c2d9a10329d1438982a6327084e466311e4f5a362e5b5d6b4c11" Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.618680 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2d18c8871a9c2d9a10329d1438982a6327084e466311e4f5a362e5b5d6b4c11"} err="failed to get container status \"c2d18c8871a9c2d9a10329d1438982a6327084e466311e4f5a362e5b5d6b4c11\": rpc error: code = NotFound desc = could not find container \"c2d18c8871a9c2d9a10329d1438982a6327084e466311e4f5a362e5b5d6b4c11\": container with ID starting with c2d18c8871a9c2d9a10329d1438982a6327084e466311e4f5a362e5b5d6b4c11 not found: ID does not exist" 
Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.618760 4705 scope.go:117] "RemoveContainer" containerID="4285f50d74a41786ccbfb08214040e3ebe412f7889aa64669ea127fcac882b8e" Jan 24 09:02:51 crc kubenswrapper[4705]: E0124 09:02:51.619233 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4285f50d74a41786ccbfb08214040e3ebe412f7889aa64669ea127fcac882b8e\": container with ID starting with 4285f50d74a41786ccbfb08214040e3ebe412f7889aa64669ea127fcac882b8e not found: ID does not exist" containerID="4285f50d74a41786ccbfb08214040e3ebe412f7889aa64669ea127fcac882b8e" Jan 24 09:02:51 crc kubenswrapper[4705]: I0124 09:02:51.619333 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4285f50d74a41786ccbfb08214040e3ebe412f7889aa64669ea127fcac882b8e"} err="failed to get container status \"4285f50d74a41786ccbfb08214040e3ebe412f7889aa64669ea127fcac882b8e\": rpc error: code = NotFound desc = could not find container \"4285f50d74a41786ccbfb08214040e3ebe412f7889aa64669ea127fcac882b8e\": container with ID starting with 4285f50d74a41786ccbfb08214040e3ebe412f7889aa64669ea127fcac882b8e not found: ID does not exist" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.108696 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mz672"] Jan 24 09:04:33 crc kubenswrapper[4705]: E0124 09:04:33.110128 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b" containerName="keystone-cron" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.110151 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b" containerName="keystone-cron" Jan 24 09:04:33 crc kubenswrapper[4705]: E0124 09:04:33.110241 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c59f136-51de-4fe6-95c6-f00cf94c1e02" containerName="copy" Jan 
24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.110286 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c59f136-51de-4fe6-95c6-f00cf94c1e02" containerName="copy" Jan 24 09:04:33 crc kubenswrapper[4705]: E0124 09:04:33.110295 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c59f136-51de-4fe6-95c6-f00cf94c1e02" containerName="gather" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.110301 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c59f136-51de-4fe6-95c6-f00cf94c1e02" containerName="gather" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.110546 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c59f136-51de-4fe6-95c6-f00cf94c1e02" containerName="copy" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.110621 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2cf05f4-9bb4-4942-bc4d-ba7de98ffd5b" containerName="keystone-cron" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.110657 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c59f136-51de-4fe6-95c6-f00cf94c1e02" containerName="gather" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.112726 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mz672" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.125424 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mz672"] Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.299523 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce55e46-fecd-46d4-b756-14236f1a42f4-utilities\") pod \"certified-operators-mz672\" (UID: \"fce55e46-fecd-46d4-b756-14236f1a42f4\") " pod="openshift-marketplace/certified-operators-mz672" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.299978 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jp5s\" (UniqueName: \"kubernetes.io/projected/fce55e46-fecd-46d4-b756-14236f1a42f4-kube-api-access-4jp5s\") pod \"certified-operators-mz672\" (UID: \"fce55e46-fecd-46d4-b756-14236f1a42f4\") " pod="openshift-marketplace/certified-operators-mz672" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.300022 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce55e46-fecd-46d4-b756-14236f1a42f4-catalog-content\") pod \"certified-operators-mz672\" (UID: \"fce55e46-fecd-46d4-b756-14236f1a42f4\") " pod="openshift-marketplace/certified-operators-mz672" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.401662 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jp5s\" (UniqueName: \"kubernetes.io/projected/fce55e46-fecd-46d4-b756-14236f1a42f4-kube-api-access-4jp5s\") pod \"certified-operators-mz672\" (UID: \"fce55e46-fecd-46d4-b756-14236f1a42f4\") " pod="openshift-marketplace/certified-operators-mz672" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.401723 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce55e46-fecd-46d4-b756-14236f1a42f4-catalog-content\") pod \"certified-operators-mz672\" (UID: \"fce55e46-fecd-46d4-b756-14236f1a42f4\") " pod="openshift-marketplace/certified-operators-mz672" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.401832 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce55e46-fecd-46d4-b756-14236f1a42f4-utilities\") pod \"certified-operators-mz672\" (UID: \"fce55e46-fecd-46d4-b756-14236f1a42f4\") " pod="openshift-marketplace/certified-operators-mz672" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.402334 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fce55e46-fecd-46d4-b756-14236f1a42f4-utilities\") pod \"certified-operators-mz672\" (UID: \"fce55e46-fecd-46d4-b756-14236f1a42f4\") " pod="openshift-marketplace/certified-operators-mz672" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.402982 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fce55e46-fecd-46d4-b756-14236f1a42f4-catalog-content\") pod \"certified-operators-mz672\" (UID: \"fce55e46-fecd-46d4-b756-14236f1a42f4\") " pod="openshift-marketplace/certified-operators-mz672" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.427612 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jp5s\" (UniqueName: \"kubernetes.io/projected/fce55e46-fecd-46d4-b756-14236f1a42f4-kube-api-access-4jp5s\") pod \"certified-operators-mz672\" (UID: \"fce55e46-fecd-46d4-b756-14236f1a42f4\") " pod="openshift-marketplace/certified-operators-mz672" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.642981 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mz672" Jan 24 09:04:33 crc kubenswrapper[4705]: I0124 09:04:33.961553 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mz672"] Jan 24 09:04:34 crc kubenswrapper[4705]: I0124 09:04:34.996694 4705 generic.go:334] "Generic (PLEG): container finished" podID="fce55e46-fecd-46d4-b756-14236f1a42f4" containerID="266a88f9e241852bafd1c85e375e0cf4959d907b34b09806b9ecd3b4c4f3d7b7" exitCode=0 Jan 24 09:04:34 crc kubenswrapper[4705]: I0124 09:04:34.997292 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz672" event={"ID":"fce55e46-fecd-46d4-b756-14236f1a42f4","Type":"ContainerDied","Data":"266a88f9e241852bafd1c85e375e0cf4959d907b34b09806b9ecd3b4c4f3d7b7"} Jan 24 09:04:34 crc kubenswrapper[4705]: I0124 09:04:34.997343 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz672" event={"ID":"fce55e46-fecd-46d4-b756-14236f1a42f4","Type":"ContainerStarted","Data":"6018b30ad86d980305d8dce80c4c0767ad209fdf296ed7af902b2ba61aa40795"} Jan 24 09:04:37 crc kubenswrapper[4705]: I0124 09:04:37.071355 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 09:04:37 crc kubenswrapper[4705]: I0124 09:04:37.071739 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 09:04:40 crc kubenswrapper[4705]: I0124 09:04:40.064899 4705 generic.go:334] "Generic 
(PLEG): container finished" podID="fce55e46-fecd-46d4-b756-14236f1a42f4" containerID="c297c4f94b6614fad2eca299181c112105f3982b420bc84df16cd3eede32b74b" exitCode=0 Jan 24 09:04:40 crc kubenswrapper[4705]: I0124 09:04:40.065020 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz672" event={"ID":"fce55e46-fecd-46d4-b756-14236f1a42f4","Type":"ContainerDied","Data":"c297c4f94b6614fad2eca299181c112105f3982b420bc84df16cd3eede32b74b"} Jan 24 09:04:41 crc kubenswrapper[4705]: I0124 09:04:41.081649 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz672" event={"ID":"fce55e46-fecd-46d4-b756-14236f1a42f4","Type":"ContainerStarted","Data":"d3b6fb23c803cc711ef9c768ca355a883c3d23b86ecefb0bf118b55333414940"} Jan 24 09:04:41 crc kubenswrapper[4705]: I0124 09:04:41.105576 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mz672" podStartSLOduration=2.610951725 podStartE2EDuration="8.105531229s" podCreationTimestamp="2026-01-24 09:04:33 +0000 UTC" firstStartedPulling="2026-01-24 09:04:35.002984922 +0000 UTC m=+5013.722858210" lastFinishedPulling="2026-01-24 09:04:40.497564416 +0000 UTC m=+5019.217437714" observedRunningTime="2026-01-24 09:04:41.10197895 +0000 UTC m=+5019.821852298" watchObservedRunningTime="2026-01-24 09:04:41.105531229 +0000 UTC m=+5019.825404517" Jan 24 09:04:43 crc kubenswrapper[4705]: I0124 09:04:43.644808 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mz672" Jan 24 09:04:43 crc kubenswrapper[4705]: I0124 09:04:43.645100 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mz672" Jan 24 09:04:43 crc kubenswrapper[4705]: I0124 09:04:43.859221 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-mz672" Jan 24 09:04:53 crc kubenswrapper[4705]: I0124 09:04:53.699180 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mz672" Jan 24 09:04:53 crc kubenswrapper[4705]: I0124 09:04:53.771876 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mz672"] Jan 24 09:04:53 crc kubenswrapper[4705]: I0124 09:04:53.821328 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rmh7t"] Jan 24 09:04:53 crc kubenswrapper[4705]: I0124 09:04:53.821604 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rmh7t" podUID="2fbcc45f-578c-43e4-9351-4b30d72d28f9" containerName="registry-server" containerID="cri-o://658e49e3ac62c9d2c4d66fcd75914d3190d305d3d457fc935c49cfd4b9381e10" gracePeriod=2 Jan 24 09:04:54 crc kubenswrapper[4705]: I0124 09:04:54.236287 4705 generic.go:334] "Generic (PLEG): container finished" podID="2fbcc45f-578c-43e4-9351-4b30d72d28f9" containerID="658e49e3ac62c9d2c4d66fcd75914d3190d305d3d457fc935c49cfd4b9381e10" exitCode=0 Jan 24 09:04:54 crc kubenswrapper[4705]: I0124 09:04:54.236674 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmh7t" event={"ID":"2fbcc45f-578c-43e4-9351-4b30d72d28f9","Type":"ContainerDied","Data":"658e49e3ac62c9d2c4d66fcd75914d3190d305d3d457fc935c49cfd4b9381e10"} Jan 24 09:04:54 crc kubenswrapper[4705]: I0124 09:04:54.477098 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 09:04:54 crc kubenswrapper[4705]: I0124 09:04:54.570495 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrt5k\" (UniqueName: \"kubernetes.io/projected/2fbcc45f-578c-43e4-9351-4b30d72d28f9-kube-api-access-zrt5k\") pod \"2fbcc45f-578c-43e4-9351-4b30d72d28f9\" (UID: \"2fbcc45f-578c-43e4-9351-4b30d72d28f9\") " Jan 24 09:04:54 crc kubenswrapper[4705]: I0124 09:04:54.570549 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbcc45f-578c-43e4-9351-4b30d72d28f9-catalog-content\") pod \"2fbcc45f-578c-43e4-9351-4b30d72d28f9\" (UID: \"2fbcc45f-578c-43e4-9351-4b30d72d28f9\") " Jan 24 09:04:54 crc kubenswrapper[4705]: I0124 09:04:54.570593 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbcc45f-578c-43e4-9351-4b30d72d28f9-utilities\") pod \"2fbcc45f-578c-43e4-9351-4b30d72d28f9\" (UID: \"2fbcc45f-578c-43e4-9351-4b30d72d28f9\") " Jan 24 09:04:54 crc kubenswrapper[4705]: I0124 09:04:54.574589 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fbcc45f-578c-43e4-9351-4b30d72d28f9-utilities" (OuterVolumeSpecName: "utilities") pod "2fbcc45f-578c-43e4-9351-4b30d72d28f9" (UID: "2fbcc45f-578c-43e4-9351-4b30d72d28f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 09:04:54 crc kubenswrapper[4705]: I0124 09:04:54.580578 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fbcc45f-578c-43e4-9351-4b30d72d28f9-kube-api-access-zrt5k" (OuterVolumeSpecName: "kube-api-access-zrt5k") pod "2fbcc45f-578c-43e4-9351-4b30d72d28f9" (UID: "2fbcc45f-578c-43e4-9351-4b30d72d28f9"). InnerVolumeSpecName "kube-api-access-zrt5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 09:04:54 crc kubenswrapper[4705]: I0124 09:04:54.651864 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fbcc45f-578c-43e4-9351-4b30d72d28f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fbcc45f-578c-43e4-9351-4b30d72d28f9" (UID: "2fbcc45f-578c-43e4-9351-4b30d72d28f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 09:04:54 crc kubenswrapper[4705]: I0124 09:04:54.674609 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrt5k\" (UniqueName: \"kubernetes.io/projected/2fbcc45f-578c-43e4-9351-4b30d72d28f9-kube-api-access-zrt5k\") on node \"crc\" DevicePath \"\"" Jan 24 09:04:54 crc kubenswrapper[4705]: I0124 09:04:54.674877 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbcc45f-578c-43e4-9351-4b30d72d28f9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 09:04:54 crc kubenswrapper[4705]: I0124 09:04:54.674888 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbcc45f-578c-43e4-9351-4b30d72d28f9-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 09:04:55 crc kubenswrapper[4705]: I0124 09:04:55.246371 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmh7t" event={"ID":"2fbcc45f-578c-43e4-9351-4b30d72d28f9","Type":"ContainerDied","Data":"41183928698f2aab69ca463f7c2a2632bf923b1dec13992de3286b98a5547b65"} Jan 24 09:04:55 crc kubenswrapper[4705]: I0124 09:04:55.246429 4705 scope.go:117] "RemoveContainer" containerID="658e49e3ac62c9d2c4d66fcd75914d3190d305d3d457fc935c49cfd4b9381e10" Jan 24 09:04:55 crc kubenswrapper[4705]: I0124 09:04:55.246462 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rmh7t" Jan 24 09:04:55 crc kubenswrapper[4705]: I0124 09:04:55.268813 4705 scope.go:117] "RemoveContainer" containerID="e48c0e72ec4d581242d7b99dca1b90761607d7c14741fc54c495d9871bae535c" Jan 24 09:04:55 crc kubenswrapper[4705]: I0124 09:04:55.280083 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rmh7t"] Jan 24 09:04:55 crc kubenswrapper[4705]: I0124 09:04:55.300863 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rmh7t"] Jan 24 09:04:55 crc kubenswrapper[4705]: I0124 09:04:55.458811 4705 scope.go:117] "RemoveContainer" containerID="9dfdcf7b85c6530a1bcb2c331fe6645001aba6c1c09a40910b40e3ff6ddf44da" Jan 24 09:04:55 crc kubenswrapper[4705]: I0124 09:04:55.589323 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fbcc45f-578c-43e4-9351-4b30d72d28f9" path="/var/lib/kubelet/pods/2fbcc45f-578c-43e4-9351-4b30d72d28f9/volumes" Jan 24 09:05:07 crc kubenswrapper[4705]: I0124 09:05:07.071670 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 09:05:07 crc kubenswrapper[4705]: I0124 09:05:07.072247 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 09:05:37 crc kubenswrapper[4705]: I0124 09:05:37.321171 4705 patch_prober.go:28] interesting pod/machine-config-daemon-dxqp2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 09:05:37 crc kubenswrapper[4705]: I0124 09:05:37.321630 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 09:05:37 crc kubenswrapper[4705]: I0124 09:05:37.321672 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2"
Jan 24 09:05:37 crc kubenswrapper[4705]: I0124 09:05:37.322903 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"731808473f8b10ab9daac20a4d992c20024626acd8f3e0e9bde4c3e06b064de3"} pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 24 09:05:37 crc kubenswrapper[4705]: I0124 09:05:37.322959 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerName="machine-config-daemon" containerID="cri-o://731808473f8b10ab9daac20a4d992c20024626acd8f3e0e9bde4c3e06b064de3" gracePeriod=600
Jan 24 09:05:37 crc kubenswrapper[4705]: E0124 09:05:37.486036 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762"
Jan 24 09:05:37 crc kubenswrapper[4705]: I0124 09:05:37.686963 4705 generic.go:334] "Generic (PLEG): container finished" podID="a7b3b969-5164-4f10-8758-72b7e2f4b762" containerID="731808473f8b10ab9daac20a4d992c20024626acd8f3e0e9bde4c3e06b064de3" exitCode=0
Jan 24 09:05:37 crc kubenswrapper[4705]: I0124 09:05:37.687011 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" event={"ID":"a7b3b969-5164-4f10-8758-72b7e2f4b762","Type":"ContainerDied","Data":"731808473f8b10ab9daac20a4d992c20024626acd8f3e0e9bde4c3e06b064de3"}
Jan 24 09:05:37 crc kubenswrapper[4705]: I0124 09:05:37.687051 4705 scope.go:117] "RemoveContainer" containerID="81a244c9bc0e382edb4e7cfeeb01a88091ed158d21d24aa2a5e7678c36d5ec49"
Jan 24 09:05:37 crc kubenswrapper[4705]: I0124 09:05:37.688126 4705 scope.go:117] "RemoveContainer" containerID="731808473f8b10ab9daac20a4d992c20024626acd8f3e0e9bde4c3e06b064de3"
Jan 24 09:05:37 crc kubenswrapper[4705]: E0124 09:05:37.690239 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762"
Jan 24 09:05:49 crc kubenswrapper[4705]: I0124 09:05:49.577434 4705 scope.go:117] "RemoveContainer" containerID="731808473f8b10ab9daac20a4d992c20024626acd8f3e0e9bde4c3e06b064de3"
Jan 24 09:05:49 crc kubenswrapper[4705]: E0124 09:05:49.578590 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762"
Jan 24 09:06:01 crc kubenswrapper[4705]: I0124 09:06:01.583473 4705 scope.go:117] "RemoveContainer" containerID="731808473f8b10ab9daac20a4d992c20024626acd8f3e0e9bde4c3e06b064de3"
Jan 24 09:06:01 crc kubenswrapper[4705]: E0124 09:06:01.584260 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762"
Jan 24 09:06:13 crc kubenswrapper[4705]: I0124 09:06:13.576507 4705 scope.go:117] "RemoveContainer" containerID="731808473f8b10ab9daac20a4d992c20024626acd8f3e0e9bde4c3e06b064de3"
Jan 24 09:06:13 crc kubenswrapper[4705]: E0124 09:06:13.577436 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dxqp2_openshift-machine-config-operator(a7b3b969-5164-4f10-8758-72b7e2f4b762)\"" pod="openshift-machine-config-operator/machine-config-daemon-dxqp2" podUID="a7b3b969-5164-4f10-8758-72b7e2f4b762"
Jan 24 09:06:20 crc kubenswrapper[4705]: I0124 09:06:20.692382 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hj52m"]
Jan 24 09:06:20 crc kubenswrapper[4705]: E0124 09:06:20.693781 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbcc45f-578c-43e4-9351-4b30d72d28f9" containerName="registry-server"
Jan 24 09:06:20 crc kubenswrapper[4705]: I0124 09:06:20.693804 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbcc45f-578c-43e4-9351-4b30d72d28f9" containerName="registry-server"
Jan 24 09:06:20 crc kubenswrapper[4705]: E0124 09:06:20.693868 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbcc45f-578c-43e4-9351-4b30d72d28f9" containerName="extract-content"
Jan 24 09:06:20 crc kubenswrapper[4705]: I0124 09:06:20.693907 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbcc45f-578c-43e4-9351-4b30d72d28f9" containerName="extract-content"
Jan 24 09:06:20 crc kubenswrapper[4705]: E0124 09:06:20.693942 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbcc45f-578c-43e4-9351-4b30d72d28f9" containerName="extract-utilities"
Jan 24 09:06:20 crc kubenswrapper[4705]: I0124 09:06:20.693955 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbcc45f-578c-43e4-9351-4b30d72d28f9" containerName="extract-utilities"
Jan 24 09:06:20 crc kubenswrapper[4705]: I0124 09:06:20.694397 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fbcc45f-578c-43e4-9351-4b30d72d28f9" containerName="registry-server"
Jan 24 09:06:20 crc kubenswrapper[4705]: I0124 09:06:20.697124 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hj52m"
Jan 24 09:06:20 crc kubenswrapper[4705]: I0124 09:06:20.703546 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hj52m"]
Jan 24 09:06:20 crc kubenswrapper[4705]: I0124 09:06:20.883323 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c4df498-a7b9-4773-927b-6b1c037c55eb-catalog-content\") pod \"redhat-operators-hj52m\" (UID: \"7c4df498-a7b9-4773-927b-6b1c037c55eb\") " pod="openshift-marketplace/redhat-operators-hj52m"
Jan 24 09:06:20 crc kubenswrapper[4705]: I0124 09:06:20.883383 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c4df498-a7b9-4773-927b-6b1c037c55eb-utilities\") pod \"redhat-operators-hj52m\" (UID: \"7c4df498-a7b9-4773-927b-6b1c037c55eb\") " pod="openshift-marketplace/redhat-operators-hj52m"
Jan 24 09:06:20 crc kubenswrapper[4705]: I0124 09:06:20.883876 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl84c\" (UniqueName: \"kubernetes.io/projected/7c4df498-a7b9-4773-927b-6b1c037c55eb-kube-api-access-tl84c\") pod \"redhat-operators-hj52m\" (UID: \"7c4df498-a7b9-4773-927b-6b1c037c55eb\") " pod="openshift-marketplace/redhat-operators-hj52m"
Jan 24 09:06:20 crc kubenswrapper[4705]: I0124 09:06:20.986506 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c4df498-a7b9-4773-927b-6b1c037c55eb-catalog-content\") pod \"redhat-operators-hj52m\" (UID: \"7c4df498-a7b9-4773-927b-6b1c037c55eb\") " pod="openshift-marketplace/redhat-operators-hj52m"
Jan 24 09:06:20 crc kubenswrapper[4705]: I0124 09:06:20.986959 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c4df498-a7b9-4773-927b-6b1c037c55eb-utilities\") pod \"redhat-operators-hj52m\" (UID: \"7c4df498-a7b9-4773-927b-6b1c037c55eb\") " pod="openshift-marketplace/redhat-operators-hj52m"
Jan 24 09:06:20 crc kubenswrapper[4705]: I0124 09:06:20.987126 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl84c\" (UniqueName: \"kubernetes.io/projected/7c4df498-a7b9-4773-927b-6b1c037c55eb-kube-api-access-tl84c\") pod \"redhat-operators-hj52m\" (UID: \"7c4df498-a7b9-4773-927b-6b1c037c55eb\") " pod="openshift-marketplace/redhat-operators-hj52m"
Jan 24 09:06:20 crc kubenswrapper[4705]: I0124 09:06:20.987259 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c4df498-a7b9-4773-927b-6b1c037c55eb-catalog-content\") pod \"redhat-operators-hj52m\" (UID: \"7c4df498-a7b9-4773-927b-6b1c037c55eb\") " pod="openshift-marketplace/redhat-operators-hj52m"
Jan 24 09:06:20 crc kubenswrapper[4705]: I0124 09:06:20.987467 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c4df498-a7b9-4773-927b-6b1c037c55eb-utilities\") pod \"redhat-operators-hj52m\" (UID: \"7c4df498-a7b9-4773-927b-6b1c037c55eb\") " pod="openshift-marketplace/redhat-operators-hj52m"
Jan 24 09:06:21 crc kubenswrapper[4705]: I0124 09:06:21.021795 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl84c\" (UniqueName: \"kubernetes.io/projected/7c4df498-a7b9-4773-927b-6b1c037c55eb-kube-api-access-tl84c\") pod \"redhat-operators-hj52m\" (UID: \"7c4df498-a7b9-4773-927b-6b1c037c55eb\") " pod="openshift-marketplace/redhat-operators-hj52m"
Jan 24 09:06:21 crc kubenswrapper[4705]: I0124 09:06:21.026012 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hj52m"
Jan 24 09:06:21 crc kubenswrapper[4705]: I0124 09:06:21.783316 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hj52m"]